This repository evaluates BCI methods on various tasks, datasets, and paradigms. The problem that is tackled depends on the paradigm and the evaluation.
BCI problems aim to discriminate between various active conditions of subjects that are recorded using a neuroimaging device such as an EEG headband. The paradigm defines the various conditions, or classes: for example, in a motor-imagery paradigm the classes can be imagined left-hand versus right-hand movements.
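To make the classification setting concrete, the snippet below is a minimal sketch of a single-subject decoding problem. It assumes epoched trials stored in a NumPy array of shape (n_trials, n_channels, n_times) with one label per trial, and uses a covariance + MDM pipeline from pyriemann purely as an illustration (MDM is one of the solvers run in this benchmark); the synthetic data and variable names are hypothetical:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

# Hypothetical epoched EEG: 200 trials, 22 channels, 501 time samples,
# two motor-imagery classes (imagined left hand vs right hand).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 22, 501))
y = np.array(["left_hand", "right_hand"] * 100)

# One spatial covariance matrix per trial, then minimum-distance-to-mean
# classification in the Riemannian manifold of covariance matrices.
clf = make_pipeline(Covariances(estimator="oas"), MDM(metric="riemann"))

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))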
Usually, multiple subjects are recorded over multiple sessions, during which they repeat the conditions several times. The choice of evaluation process defines which of these subjects and sessions form the training data and which form the test data:
- With the intra-session evaluation (Within-Session), we use part of the trials from one session as the training data and the remaining trials as test data.
- With the inter-session evaluation (Cross-Session), we use all the trials from various sessions of the same subject as training data and evaluate on the trials from a different session of that subject. This is harder because the EEG device has been repositioned between the two sessions, but it is still the same individual being recorded.
- The inter-subject evaluation (Cross-Subject) is even harder: this time, one uses all the trials from different subjects to train the classifier and evaluates on the trials from a subject that was not included in the training data. This tests the generalization capabilities of the algorithms across individuals.
For each of these evaluation procedures, we can evaluate all combinations of train/test trials, sessions, or subjects; a sketch of how such splits can be formed is given below.
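As an illustration of how these train/test splits can be formed, the sketch below uses scikit-learn's group-based splitters on per-trial metadata (a subject and a session identifier for each trial). The metadata layout and variable names are assumptions made for the example; the benchmark's own evaluation code defines the actual splits:

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

# Hypothetical metadata: 4 subjects, 2 sessions each, 50 trials per session.
n_trials = 400
subjects = np.repeat([1, 2, 3, 4], 100)
sessions = np.tile(np.repeat(["session_1", "session_2"], 50), 4)
trials = np.arange(n_trials)
logo = LeaveOneGroupOut()

# Within-session: split the trials of a single session of a single subject.
mask = (subjects == 1) & (sessions == "session_1")
train_idx, test_idx = train_test_split(trials[mask], test_size=0.2, random_state=0)

# Cross-session: hold out one session of a subject, train on the other(s).
subj = subjects == 1
for train_idx, test_idx in logo.split(trials[subj], groups=sessions[subj]):
    pass  # train on one session of subject 1, test on the held-out session

# Cross-subject: hold out all trials of one subject, train on the others.
for train_idx, test_idx in logo.split(trials, groups=subjects):
    pass  # train on three subjects, test on the fourth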
Finally, once the process to obtain the test data is defined, the benchmark can be run using the following commands:
$ pip install -U benchopt
$ git clone https://github.com/benchopt/benchmark_bci
$ benchopt run benchmark_bci
Apart from the problem, options can be passed to benchopt run to restrict the benchmark to some solvers or datasets, e.g.:
$ benchopt run benchmark_bci -s MDM -d BNCI -r 1 -n 1
Use benchopt run -h for more details about these options, or visit https://benchopt.github.io/api.html.