
Riemannian classification of single-trial surface EEG and sources during checkerboard and navigational images in humans

  • Cédric Simar ,

    Roles Data curation, Formal analysis, Methodology, Software, Supervision, Validation, Writing – original draft, Writing – review & editing

    cedric.simar@ulb.be

    Affiliation Machine Learning Group, Computer Science Department, Faculty of Sciences, Université Libre de Bruxelles (ULB), Brussels, Belgium

  • Robin Petit,

    Roles Data curation, Formal analysis, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Machine Learning Group, Computer Science Department, Faculty of Sciences, Université Libre de Bruxelles (ULB), Brussels, Belgium, Interuniversity Institute of Bioinformatics in Brussels, Université Libre de Bruxelles – Vrije Universiteit Brussel, Brussels, Belgium

  • Nichita Bozga,

    Roles Data curation, Formal analysis, Writing – original draft

    Affiliation Machine Learning Group, Computer Science Department, Faculty of Sciences, Université Libre de Bruxelles (ULB), Brussels, Belgium

  • Axelle Leroy,

    Roles Conceptualization, Data curation, Funding acquisition, Methodology

    Affiliation Laboratory of Neurophysiology and Movement Biomechanics, Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium

  • Ana-Maria Cebolla,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Laboratory of Neurophysiology and Movement Biomechanics, Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium

  • Mathieu Petieau,

    Roles Data curation, Resources, Software

    Affiliation Laboratory of Neurophysiology and Movement Biomechanics, Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium

  • Gianluca Bontempi,

    Roles Data curation, Formal analysis, Investigation, Methodology, Supervision, Writing – original draft

    Affiliation Machine Learning Group, Computer Science Department, Faculty of Sciences, Université Libre de Bruxelles (ULB), Brussels, Belgium

  • Guy Cheron

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliations Laboratory of Neurophysiology and Movement Biomechanics, Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium, Laboratory of Electrophysiology, Université de Mons-Hainaut, Mons, Belgium

Abstract

Objective

Different visual stimuli are classically used to trigger visual evoked potentials comprising well-defined components linked to the content of the displayed image. These evoked components result from the averaging of ongoing EEG signals, in which additive and oscillatory mechanisms contribute to the component morphology. Event-related potentials often result from a mixed situation (power variation and phase-locking), making basic and clinical interpretations difficult. Besides, the grand average methodology produces artificial constructs that do not reflect individual peculiarities. This motivated new approaches based on single-trial analysis, as recently used in the brain-computer interface field.

Approach

We hypothesize that EEG signals may include specific information about the visual features of the displayed image and that such distinctive traits can be identified by state-of-the-art classification algorithms based on Riemannian geometry. The same classification algorithms are also applied to the dipole sources estimated by sLORETA.

Main results and significance

We show that our classification pipeline can effectively discriminate between the display of different visual items (Checkerboard versus 3D navigational image) in single EEG trials across multiple subjects. The present methodology reaches a single-trial classification accuracy of about 84% and 93% for inter-subject and intra-subject classification respectively using surface EEG. Interestingly, we note that the classification algorithms trained on sLORETA source estimations fail to generalize across multiple subjects (63%), which may be due either to the average head model used by sLORETA or to the subsequent spatial filtering failing to extract discriminative information, but reach an intra-subject classification accuracy of 82%.

1. Introduction

Since its discovery [1, 2], EEG has increasingly been used in fundamental, clinical, and industrial research. For each of these domains, specific tools were successively developed. These tools include (i) intracerebral recording with microelectrodes [3, 4], which allowed the recognition of the neuronal origin of EEG signals and a better understanding of the physiological mechanisms that underlie EEG activity; (ii) the grand averaging method, consisting of averaging a series of trials [5] triggered by a repetitive event (visual, auditory, somesthetic…), which opened the event-related potential (ERP) field of study, more recently enriched by EEG dynamics tools [6, 7] including EEG source generators [8–10]; and (iii) the use of EEG for neurofeedback and brain-computer interfaces (BCI) [11, 12]. In the past, these domains and their related tools evolved separately, but the increasing accessibility of computational resources and experimental data motivated the development of transversal approaches and methodological bridges.

Visual evoked potentials (VEP) are a particular type of ERP, extracted from EEG signals recorded over the occipital cortex, that may be triggered by the display of different sorts of visual stimuli, from simple (e.g. checkerboard) [13–15] to more complex ones (e.g. human face, 3D or moving image) [14, 16–20]. VEP are obtained by computing the grand average of numerous trials of ongoing EEG signals (see Eq 1), resulting in well-defined and easily recognizable potentials that are subsequently used to better understand the successive processing stages of visual inputs. However, these evoked responses result from at least two different mechanisms, described by the additive and the oscillatory models [8, 21–24]. In the additive model, the evoked responses result from sequential bottom-up processing of sensory inputs. This produces a specific sequence of monophasic evoked component peaks that are originally embedded in the spontaneous EEG background; this latter EEG activity is considered as noise and ruled out by the subsequent averaging. In the oscillatory model, the evoked potential might result from phase-locking of the ongoing EEG rhythms within specific frequency bands. This EEG phase reorganization can be measured by the inter-trial coherence (ITC), as a response to the external stimulation. On the fundamental level, this measure is interesting only when there is no simultaneous variation (increase or decrease) of the related EEG power. In that case, we are in the presence of pure phase-locking and the evoked responses are only due to a reorganization of the ongoing EEG oscillation. For example, this is the case of the N30 component of the somatosensory evoked potential, for which 70% of the amplitude is due to pure phase-locking [25]. The fact that, in the majority of ERP studies, a mixed situation (power variation and phase-locking) occurs makes basic and clinical interpretations difficult.
Another disadvantage originates from the fact that, in the majority of evoked potential studies, a grand average over a cohort of subjects is performed. Although the grand average method allows appropriate statistics [26] and practical conclusions about basic or clinical outcomes, it masks the individual peculiarities which may be critical from a clinical point of view. This problem is particularly crucial when diagnostic tools are based on grand average evoked potentials [27]. Similarly, the application of inverse modeling [10, 28] to grand average data provides very efficient recognition of the ERP generators [19, 29–31] but does not facilitate the determination of individual characteristics. In the face of these shortcomings of classical ERP analysis, the introduction of machine learning tools based on different classification pipelines, such as Riemannian geometry, may allow better exploitation of single trials and individual characteristics in the evoked potentials domain [32–41]. The relevance of considering single-trial analyses with neuroimaging data is also discussed in [42].

We hypothesize that the aforementioned ERP components of the EEG signals contain discriminative information characterizing the visual features of the image that can be identified, in single trials, by a state-of-the-art classification pipeline based on the canonical Riemannian geometry of covariance matrices. The main contribution of this study with respect to the state of the art is the proposal of a methodological framework demonstrating that it is possible to gain insights into the classical evoked potentials field by applying recent BCI classification techniques [36], allowing the discrimination of visual evoked responses in a single-trial approach across multiple subjects, as opposed to the classical grand average approach based solely on mean EEG signals. For this, we use a classification pipeline based on xDAWN spatial filtering [33] and Riemannian geometry applied to single-trial EEG data recorded during visual stimulation. Riemannian geometry classifiers have received growing attention in recent years [43], particularly due to their first-class performance in international Brain-Computer Interface (BCI) competitions [44]. Besides, special attention is given to the potential advantage of introducing inverse modeling into the Riemannian classification pipeline. Furthermore, in this study, we compare the discriminative power of our framework when trained on each subject separately (hereunder referred to as intra-subject classification) or on all subjects indistinctively (hereunder referred to as inter-subject classification). The latter is paramount to estimate how the framework classification model can generalize to unseen subjects.

We show that our present methodology can effectively discriminate between single-trial EEG signals from the presentation of different visual items (Checkerboard versus 3D navigational image) with a classification accuracy of about 84% and 93% for the inter-subject and intra-subject conditions respectively. These successful results motivate us to introduce the “Riemannian Single-Trial Analysis” (RSTA) approach, whose interest will be further discussed in comparison to the grand average analysis approach commonly used in neuroscience. Additionally, a comparative ERP and RSTA analysis of the grey images displayed between visual stimuli was performed and is presented as S1 File, in order to show that these images, typically considered neutral, actually carry discriminative information about the preceding visual stimulus presented to the subject.

2. Materials and methods

2.1 Participants

The data were collected from 15 healthy volunteers. All participants were right-handed, had no neurological condition, and had normal vision, including 3D vision. Each participant gave informed consent to the experimental procedures. All experimental protocols were approved by the Ethics Committee of the Université Libre de Bruxelles, CHU Brugmann, and conducted in conformity with European Union directive 2001/20/EC of the European Parliament.

2.2 Experimental paradigms

Participants watched the EGA screen of an IBM laptop (screen of 22.0 cm height, 30.3 cm width; refresh rate of 75 Hz, resolution of 800 × 600 pixels) centered on the line of gaze at a distance of 30 cm from the eyes, through a cylindrical tunnel adapted to remove any external visual interference, as previously used by our group [19]. We presented a sequence of checkerboards comprising 96 images intermixed with 96 grey images, and a sequence of 3D-Tunnel presentations containing 192 images from four randomized corridor orientations (up, down, right, and left) giving an implicit illusion of virtual navigation, intermixed with grey images (Fig 1). There was no break between the recording sessions of Checkerboard and 3D-Tunnel patterns. The first recording session, displaying either Checkerboard or 3D-Tunnel patterns, was chosen at random and alternated between subjects.

Fig 1.

(A, B) Overview of the two different stimulation paradigms. A: paradigm 1, Checkerboard images (upper line) followed by grey screens. B: paradigm 2, 3D-Tunnels randomly presented in the 4 orientations (up, down, left, right), followed each time by a grey screen.

https://doi.org/10.1371/journal.pone.0262417.g001

2.3 Stimulation and recording parameters

All participants performed passive observation of the aforementioned visual stimuli. Within one recording run, either 96 Checkerboard patterns or 192 3D-Tunnels were presented to one subject, alternating with a uniform grey image (Fig 1A and 1B). An identical stimulation rate (1.0 Hz) was used in both conditions. The Checkerboard stimulus consisted of full-contrast black and white rectangles (4.5 × 4.0 cm; black field 15 lx; white field 101 lx) alternating 96 times with the grey page.

The grey page luminance was about 43 lx. The 3D-Tunnel was non-stereoscopic but included perspective cues generated by the OpenGL graphics libraries [45] (Fig 1A and 1B). It represented a tunnel with stone-textured walls (stone dimension 1.25 cm² at the periphery to 0.15 cm² close to the center) in the form of a pipe with a constant circular cross-section. These different stimuli, with a pattern contrast of about 50%, subtended a display of 7° (w) × 5° (h) at the eye. Thus, both foveal and parafoveal retinal fields were stimulated. The luminance of the 3D-Tunnel evolved from 39 lx at the periphery to 74 lx close to the center. The presentation of each visual item (presentation time of 500 ms) was immediately followed by the presentation of a uniform grey image (also for 500 ms).

2.4 EEG data treatment

EEG data were recorded with an active-shield cap using 128 Ag/AgCl sintered ring electrodes and shielded coaxial cables (5–10 electrode system placement), comfortably adjusted to the participant’s head. All recordings were referenced to the left earlobe electrode. Vertical and horizontal eye movements (EOG) were recorded bipolarly. All electrode impedances were maintained below 5 kΩ. Scalp potentials were amplified by ANT DC amplifiers (ANT neuro system, the Netherlands) and digitized at a rate of 2048 Hz with a resolution of 16 bits (range 11 mV). A band-pass filter from DC to 256 Hz and a notch filter (47.5–52.5 Hz) were also applied. Participants were asked to avoid eye blinks and to focus on the green dot presented in the middle of the screen, in order to reduce eye artifacts. To verify the effectiveness of the eye-fixation requirement, the number of eye movements was recorded throughout the different visual stimulation periods. For this, saccades, including small saccades of about 0.8°, and other eye movements were automatically selected by a Matlab (MathWorks Inc) script using an eye velocity threshold. This selection was then verified by visual inspection. For all subjects, the fixation requirement was respected: only 0.19 ± 0.09 saccades per second were recorded, irrespective of the type of image presented to the participant. Off-line data treatment and statistics were performed using the EEGLAB software [6]. Artifactual portions of the EEG data were rejected after appropriate independent component analysis (ICA). Eventually, a zero-phase IIR band-pass filter with cut-off frequencies at 1 Hz and 45 Hz was applied, and epochs containing samples from 0 to 500 ms after the stimulus were extracted, which encompasses most of the information of the early visual potential components [16]. The averaged signal value of the pre-stimulus interval, from -500 ms to 0 ms, was subtracted from the epochs for baseline correction. More details about EEG data processing can be found in our previous study [19].
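The zero-phase filtering and baseline-correction steps described above can be sketched with SciPy (a minimal sketch; the study itself used EEGLAB, and the function names and epoch layout here are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_phase_bandpass(x, fs=2048.0, lo=1.0, hi=45.0, order=4):
    """Zero-phase IIR band-pass: a Butterworth filter applied
    forward and backward (filtfilt), so no phase distortion."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def baseline_correct(epoch, fs=2048.0, pre=0.5):
    """Subtract the mean of the pre-stimulus interval (first `pre`
    seconds of the epoch) from each channel."""
    n_pre = int(pre * fs)
    return epoch - epoch[..., :n_pre].mean(axis=-1, keepdims=True)
```

The forward-backward application doubles the effective filter order but preserves component latencies, which matters when peaks such as the P100 are interpreted.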

Besides, in order to facilitate future applications in clinical routines, only the following 12 electrodes uniformly distributed on the scalp have been used for classification: F3, Fz, F4, C3, Cz, C4, P3, Pz, P4, O1, Oz, and O2.

As the 3D-Tunnel can be presented in 4 different orientations, and as a sufficient number of trials was required for each of them [31], the dataset originally presented an unbalanced number of Checkerboard and 3D-Tunnel trials. Thus, in order to avoid any bias in the classification results, for each subject separately, we applied randomized undersampling, i.e. randomly removing trials from the majority class (3D-Tunnel), to obtain an equal number of Checkerboard and 3D-Tunnel trials.
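A minimal sketch of this randomized undersampling, assuming trials are stacked along the first axis (names are illustrative):

```python
import numpy as np

def undersample(trials, labels, majority=1, seed=0):
    """Randomly drop trials from the majority class until both
    classes contain the same number of trials."""
    rng = np.random.default_rng(seed)
    idx_maj = np.flatnonzero(labels == majority)
    idx_min = np.flatnonzero(labels != majority)
    keep = rng.choice(idx_maj, size=idx_min.size, replace=False)
    idx = np.sort(np.concatenate([idx_min, keep]))  # keep original order
    return trials[idx], labels[idx]
```

Applying this per subject, as in the text, guards against one subject's class imbalance leaking into the pooled inter-subject dataset.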

2.5 XDAWN filtering and covariances

As described in Barachant (2014) [46], the xDAWN algorithm estimates a chosen number of spatial filters that enhance the signal-to-noise ratio of the evoked potentials for each class.

Let E denote the number of electrodes, X_i^(k) ∈ ℝ^(E×T) the trial of index i with 1 ≤ i ≤ N^(k), T the number of time samples, and N^(k) the number of trials from class k. Let P^(k) denote the grand average of class k, or equivalently the VEP of the visual stimulus corresponding to class k, defined as:

P^(k) = (1 / N^(k)) ∑_{i=1}^{N^(k)} X_i^(k) (1)

Let X ∈ ℝ^(E×NT) denote the matrix representing the entire signal, obtained by concatenating all N = N^(0) + N^(1) trials from the two classes.

Each spatial filter is a vector w^(k) ∈ ℝ^E estimated to increase the signal-to-noise ratio of its related class. In other words, for class k we maximize:

SNR(w^(k)) = (w^(k)ᵀ P^(k) P^(k)ᵀ w^(k)) / (w^(k)ᵀ X Xᵀ w^(k)) (2)

These filters can be found by maximizing this generalized Rayleigh quotient, i.e. by solving the generalized eigenvalue problem on the matrices P^(k) P^(k)ᵀ and X Xᵀ. Only the filters associated with the F highest eigenvalues are selected (F being the parameterizable number of xDAWN spatial filters).
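The generalized eigenvalue problem can be solved with scipy.linalg.eigh; a minimal sketch, assuming the grand average P^(k) and the concatenated signal X are already available (names are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def xdawn_filters(P_k, X, n_filters=3):
    """Return the n_filters generalized eigenvectors maximizing
    the Rayleigh quotient w.T @ (P_k P_k.T) @ w / w.T @ (X X.T) @ w."""
    A = P_k @ P_k.T   # evoked-response energy
    B = X @ X.T       # total signal energy
    eigvals, eigvecs = eigh(A, B)            # ascending eigenvalues
    return eigvecs[:, ::-1][:, :n_filters]   # top-F filters, one per column
```

eigh(A, B) solves the generalized symmetric-definite problem A w = λ B w; the eigenvector with the largest λ maximizes the quotient in Eq 2.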

Let W^(k) ∈ ℝ^(E×F) denote the selected spatial filters for class k, and W = [W^(0) W^(1)] the aggregation of the spatial filters. The spatially filtered data is then defined by:

Z_i = Wᵀ X_i (3)

We define a new matrix Z̃_i by concatenating the filtered averaged trials from both classes (0 and 1) with the spatially filtered data as follows:

Z̃_i = [W^(0)ᵀ P^(0); W^(1)ᵀ P^(1); Wᵀ X_i] (4)

Finally, we can estimate the covariance matrix Σ_i of Z̃_i:

Σ_i = (1 / (T − 1)) Z̃_i Z̃_iᵀ (5)

In particular, the shape of the covariance matrices estimated after filtering the data is 4F × 4F. In order to maintain covariance matrices of shape 12 × 12, we fixed the number of filters at F = 3.

In practice, Σ_i is computed using a well-conditioned estimator such as Ledoit-Wolf [47] or OAS [48].
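Both estimators are available in scikit-learn; a minimal sketch of the estimation on one (channels × samples) super-trial (the wrapper name is illustrative):

```python
import numpy as np
from sklearn.covariance import LedoitWolf, OAS

def shrunk_covariance(Z, method="lw"):
    """Well-conditioned covariance estimate of a (channels x samples)
    block via shrinkage (Ledoit-Wolf or OAS)."""
    est = LedoitWolf() if method == "lw" else OAS()
    # scikit-learn expects samples in rows and variables in columns.
    return est.fit(Z.T).covariance_
```

Shrinkage keeps the estimated matrices strictly positive definite even with short epochs, which is required for the Riemannian operations that follow.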

2.6 Tangent space mapping

In all our classification pipelines, we apply a tangent space mapping operator to the previously estimated covariance matrices and use the mapped result as the input to a logistic regression classifier.

Covariance matrices are symmetric positive definite (SPD) and therefore do not lie in a vector space but in a convex cone [49] which has a Riemannian manifold structure, i.e. for each point of the manifold, there is an associated tangent space where a dot product is defined. In particular, we consider the tangent space at the point Σ_ref, which corresponds to the geometric mean of the whole set of covariance matrices. More specifically, Σ_ref is the point minimizing the average Fréchet distance to the set of covariance matrices. The choice of Σ_ref is motivated by the observation from Tuzel et al. [50] that the geometric mean is the point where the mapping onto the tangent space leads to the best local approximation of the manifold.

Tangent space mapping is the action of projecting the SPD matrices from the manifold onto the associated tangent space. In the tangent space, the n × n covariance matrices will be represented by a vector of dimension n(n + 1)/2. This projection operator at the reference point Σ_ref is defined by Barachant et al. [51] as the vectorized upper triangular part of Σ_ref^(−1/2) Φ(Σ) Σ_ref^(−1/2), where Φ(Σ) is defined as:

Φ(Σ) = Log_{Σ_ref}(Σ) = Σ_ref^(1/2) logm(Σ_ref^(−1/2) Σ Σ_ref^(−1/2)) Σ_ref^(1/2) (6)

where Log_{Σ_ref}(Σ) denotes the matrix logarithm of Σ with respect to Σ_ref, and logm the ordinary matrix logarithm.
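A minimal sketch of this mapping with SciPy, assuming Σ_ref has already been computed (the iterative geometric-mean computation is taken as given, and the √2 weighting of off-diagonal coefficients used by some implementations is omitted for simplicity):

```python
import numpy as np
from scipy.linalg import inv, logm, sqrtm

def tangent_space_vector(cov, cov_ref):
    """Map an SPD matrix onto the tangent space at cov_ref and
    return its n(n+1)/2 upper-triangular coefficients."""
    ref_inv_sqrt = inv(sqrtm(cov_ref))
    # Whitened matrix logarithm: logm(ref^-1/2 @ cov @ ref^-1/2)
    S = logm(ref_inv_sqrt @ cov @ ref_inv_sqrt)
    iu = np.triu_indices(S.shape[0])
    return np.real(S[iu])
```

A 12 × 12 covariance matrix thus becomes a 78-dimensional Euclidean vector, suitable as input to an ordinary logistic regression.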

2.7 Source analysis

The objective of source analysis is to improve spatial resolution, increase SNR, and detect subcortical activities not directly observable on scalp EEG. In this work, we estimated cortical and subcortical sources using the non-parametric inverse method Standardized Low Resolution Electromagnetic Tomography (sLORETA) [9, 28]. This work used the sLORETA estimation implemented in the MNE Python library [52], which is formally defined as follows. Let Ĵ(α) ∈ ℝ^(3M×T) be the source estimation matrix:

Ĵ(α) = T(α) X with T(α) = Kᵀ (K Kᵀ + α I)^(−1)

where E is the number of electrodes at the scalp surface, M is the number of sources within the brain volume, X ∈ ℝ^(E×T) is the EEG signal matrix, K ∈ ℝ^(E×3M) is the lead field matrix and α > 0 is a regularization parameter.

Consider S_Ĵ(α), the covariance matrix of Ĵ(α), defined as S_Ĵ(α) = T(α) K. It is a 3M × 3M square matrix with M diagonal blocks of shape 3 × 3 denoted by Σ_1, …, Σ_M. The normalized sLORETA source estimation of the v-th voxel is therefore given by:

ĵ_vᵀ Σ_v^(−1) ĵ_v with ĵ_v = T_v(α) X

where T_v(α) is the v-th 3 × E row block of T(α).
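A simplified numerical sketch of this standardization, assuming fixed-orientation sources so that each 3 × 3 block Σ_v reduces to a scalar (the study itself used the MNE implementation; names and the regularization value are illustrative):

```python
import numpy as np

def sloreta(X, K, alpha=1e-2):
    """Minimum-norm inverse with sLORETA-style standardization for
    fixed-orientation sources (one value per voxel)."""
    E = K.shape[0]
    # Regularized inverse operator T(alpha) = K.T (K K.T + alpha I)^-1
    T_op = K.T @ np.linalg.inv(K @ K.T + alpha * np.eye(E))
    J = T_op @ X                      # minimum-norm current estimate
    # Diagonal of the "resolution" matrix T(alpha) K: per-voxel variance
    S = np.sum(T_op * K.T, axis=1)
    return J / np.sqrt(S)[:, None]    # standardized source estimate
```

The division by √S implements, for the scalar case, the per-voxel normalization ĵ_vᵀ Σ_v^(−1) ĵ_v that removes the depth bias of the raw minimum-norm solution.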

2.8 Classification pipelines

This section describes the classification pipelines illustrated in Fig 2. The accuracy of each inter-subject pipeline was estimated using Leave-One-Subject-Out Cross-Validation (LOSO-CV), where each of the 15 subjects was used in turn for validation of the classification algorithm trained on the remaining 14 subjects. Similarly, the accuracy of each intra-subject pipeline was estimated using 4-fold cross-validation where, for each subject, a quarter of the trials was used for validation of the classification algorithm trained on the remaining three quarters. xDAWN spatial filters were estimated on the training set and applied to the validation set.
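Both validation schemes map directly onto scikit-learn splitters; a minimal sketch on synthetic tangent-space features (the shapes, labels, and per-subject trial counts are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (LeaveOneGroupOut, StratifiedKFold,
                                     cross_val_score)

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 10))       # e.g. tangent-space feature vectors
y = np.tile([0, 1], 75)                  # Tunnel vs Checkerboard labels
subjects = np.repeat(np.arange(15), 10)  # subject id of each trial

# Inter-subject: leave one subject out, train on the remaining 14.
inter = cross_val_score(LogisticRegression(), X, y,
                        groups=subjects, cv=LeaveOneGroupOut())

# Intra-subject (shown for one subject): stratified 4-fold CV on that
# subject's own trials.
mask = subjects == 0
intra = cross_val_score(LogisticRegression(), X[mask], y[mask],
                        cv=StratifiedKFold(n_splits=4))
```

Grouping by subject in the inter-subject split is what guarantees that the reported 84% reflects generalization to unseen subjects rather than within-subject memorization.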

Fig 2. Flow chart of the classification pipeline with and without inverse modeling.

An asterisk in the upper-right corner denotes that the pipeline step is supervised.

https://doi.org/10.1371/journal.pone.0262417.g002

The classification pipeline of both inter-subject and intra-subject EEG signals from the 12 electrodes of 3D-Tunnel versus Checkerboard visual stimuli without inverse modeling includes (i) xDAWN spatial filtering operating on the EEG signals from all 12 electrodes, (ii) the estimation of covariance matrices, (iii) the projection onto the tangent space (TS) and (iv) a logistic regression (LR) classifier.

The classification pipeline of both inter-subject and intra-subject EEG signals of 3D-Tunnel versus Checkerboard with inverse modeling includes (i) the estimation of cortical sources based on EEG signals using the Desikan-Killiany atlas [53, 54], (ii) the averaging of cortical sources activations per atlas region, (iii) xDAWN spatial filtering operating on the 75 averaged sources activations, (iv) the estimation of covariance matrices, (v) the projection onto the tangent space (TS) and (vi) a logistic regression (LR) classifier.

The results of these classification pipelines were reported using the following metrics: ROC curves, Area Under the ROC Curve (AUC), Precision-Recall curves, Average Precision (AP), confusion matrices, and Matthews correlation coefficient (MCC).
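All of these metrics are available in scikit-learn; a minimal sketch on dummy scores (the labels and probabilities are made up for illustration):

```python
import numpy as np
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             matthews_corrcoef, roc_auc_score)

y_true = np.array([0, 0, 1, 1, 0, 1])                # ground-truth classes
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])  # classifier probabilities
y_pred = (y_score >= 0.5).astype(int)                # hard decisions

auc = roc_auc_score(y_true, y_score)            # area under the ROC curve
ap = average_precision_score(y_true, y_score)   # summary of the PR curve
mcc = matthews_corrcoef(y_true, y_pred)         # balanced +/-1 correlation
cm = confusion_matrix(y_true, y_pred)           # rows: true, cols: predicted
```

Note that AUC and AP are computed from the continuous scores, while MCC and the confusion matrix require thresholded predictions.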

The main Python libraries used for the implementation of these pipelines are MNE [52, 55], NumPy [56], SciPy [57], and scikit-learn [58].

3. Results

3.1 ERP analysis

Fig 3 illustrates the grand average ERP differences observed between the different visual stimulations, highlighting the main characteristics of the early evoked components at around 100 ms (P100) for the Checkerboard (thick blue line) and the 3D-Tunnel (thick red line) presentations. As previously observed by our group [19], the P100 component evoked by the Checkerboard was of higher amplitude than the P100 evoked by the 3D-Tunnel, which presented a biphasic configuration during the time window of the monophasic classical P100 related to the Checkerboard. These earlier evoked components were followed by a P220 in the EEG signals of both 3D-Tunnels and Checkerboards. The grand average signal of each class k corresponds to the row of the matrix P^(k), as defined in Eq 1, associated with the Oz electrode.

Fig 3.

(A) Superimposition of EEG signals recorded on the occipital area (electrode Oz) from all the single trials of Tunnels and Checkerboards stimuli of one representative subject on which the grand average signals corresponding to the Tunnel (red lines, n = 192) and the Checkerboard (blue lines, n = 96) stimuli are superimposed. (B) Superimposition of EEG signals recorded on electrode Oz from 100 randomly selected trials of Tunnel stimuli and 100 randomly selected trials of Checkerboard stimuli throughout all the subjects on which the grand average signals of all trials and all subjects corresponding to the Tunnel stimuli (red lines, n = 2880 (192 trials x 15 subjects)) and the Checkerboard (blue lines, n = 1440 (96 trials x 15 subjects)) are superimposed.

https://doi.org/10.1371/journal.pone.0262417.g003

When confronted with trials coming from a single subject, differences are visible by human visual inspection. For example, in Fig 3A we can observe differences between the single trials of Tunnels and Checkerboards between 100 and 200 ms, with the Tunnels having a noticeably lower amplitude within this time window. In contrast, discrimination by visual inspection between the two stimuli was no longer possible when randomly chosen single trials originating from all subjects were superimposed (Fig 3B). This figure illustrates the fact that the discrimination was not possible by human visual inspection and is well representative of the difficulty of the classification task. Therefore, we may expect higher accuracies from a classifier trained and evaluated on intra-subject EEG signals compared to inter-subject EEG signals.

3.2 Classification performance

3.2.1 The effect of the number and position of the electrodes.

Prior to the main classification task, we first determined whether sufficient discriminative information existed with regard to the number and position of the 12 arbitrarily selected electrodes. The number of electrodes was empirically validated by quantifying their impact on the classification accuracy using a forward feature selection method. For this, we computed the distribution of classification accuracy with respect to the number of electrodes (Fig 4A) using a LOSO cross-validation. As a result, Fig 4A illustrates that the classification score for Tunnel vs. Checkerboard reaches a global maximum using 9 electrodes. Considering that the positions of the 12 electrodes are uniformly distributed on the scalp and that the cross-validated classification accuracy reached a plateau before the 12th electrode, we can reasonably conclude that adding EEG signals from more electrodes would not substantially increase the amount of non-artifactual information.

Fig 4. Distribution of the Tunnel vs Checkerboard inter-subject cross-validated classification accuracy without inverse modeling with respect to the number of electrodes (A) and by scalp region (B) using only 3 electrodes from each locus (O1, O2, and Oz for the occipital locus, P3, P4, and Pz for the parietal locus, C3, C4, and Cz for the central locus, F3, F4 and Fz for the frontal locus).

https://doi.org/10.1371/journal.pone.0262417.g004

Furthermore, in order to test which scalp region contains the most discriminative information, we used a triad of electrodes corresponding to each of the occipital, parietal, central, and frontal regions, on which the classification pipeline was applied. Fig 4B illustrates this result for the Tunnel vs Checkerboard classification. The classification pipeline trained on the single trials of all subjects indistinctively reaches the best score for the occipital electrodes (75%), followed by the parietal electrodes (64%), and then the central and frontal electrodes (56% and 59% respectively).

3.2.2 Classification without inverse modeling.

The main results of the classification pipelines are illustrated by the ROC curve (along with the AUC), the Precision-Recall curve (along with the Average Precision), and the confusion matrix. Fig 5 illustrates these performance metrics computed on the results from the inter-subject and intra-subject classification pipelines without inverse modeling. The ROC curve for the intra-subject discrimination is more arched than the curve for the inter-subject discrimination (Fig 5A), which is confirmed by the higher AUC of the former (0.98) compared to the latter (0.92). These results were further validated by the Precision-Recall curves (Fig 5B). Besides, the confusion matrices (Fig 5C) show a balanced recognition accuracy for Tunnels and Checkerboards, although, as expected, both accuracies are significantly higher for the intra-subject classification (Fig 5C right) than for the inter-subject classification (Fig 5C left). The inter-subject and intra-subject classifiers without inverse modeling reached accuracies of 84% and 93% respectively, as well as Matthews correlation coefficients (MCC) of 0.668 and 0.860 respectively.

Fig 5.

ROC (A) and Precision-Recall (B) curves for the intra-subject and inter-subject classification of Tunnel vs Checkerboard without inverse modeling. (C) Confusion matrices for the inter-subject (left) and intra-subject (right) classification of Tunnel vs Checkerboard without inverse modeling.

https://doi.org/10.1371/journal.pone.0262417.g005

3.2.3 Classification with inverse modeling.

When working with inverse modeling on the discrimination of Tunnel vs Checkerboard, the inter-subject and intra-subject classification pipelines performed very differently on the same dataset. When applied to inter-subject data, the classification pipeline reached an accuracy of 63%. However, the intra-subject approach yielded significantly better results, reaching an accuracy of 82%. As in section 3.2.2, the major differences in accuracy between the two classification tasks are illustrated in Fig 6 using ROC curves and their respective AUC (Fig 6A), Precision-Recall curves (Fig 6B), as well as confusion matrices (Fig 6C). Additionally, the inter-subject and intra-subject classifiers with inverse modeling reached MCCs of 0.26 and 0.63 respectively.

Fig 6.

ROC (A) and Precision-Recall (B) curves for the intra-subject and inter-subject classification of Tunnel vs Checkerboard with inverse modeling. (C) Confusion matrices for the inter-subject (left) and intra-subject (right) classification of Tunnel vs Checkerboard with inverse modeling.

https://doi.org/10.1371/journal.pone.0262417.g006

We further explored the intra-subject classification by examining the source activations of a representative subject around 140 and 220 ms. As illustrated in Fig 7A and 7B, source activations seem to be higher during the visualization of the Tunnels than of the Checkerboards. For the Tunnel, two main zones were highlighted (yellow), one in the occipital cortex and the other in the sensory-motor areas extending towards the frontal cortex. For the Checkerboard, the occipital region was also identified, as well as the frontal cortex but less intensely. Although some subjects exhibited a less contrasting pattern, as illustrated in Fig 7D and 7E, the differences remain in favor of a higher cortical contribution for the Tunnel (occipital and frontal zones identified) while only the occipital region was identified for the Checkerboard. Fig 7C and 7F represent the activations of labeled cortical source areas, as defined in the Desikan-Killiany atlas, corresponding to the 3D brain visualizations from Fig 7A, 7B, 7D and 7E respectively.

Fig 7. Lateral view of the normalized mean source activation estimates (A, B, D, and E) and the corresponding matrix representation of the atlas regions (C and F), labeled as defined in the Desikan-Killiany atlas, at two different times in a single representative subject: 140 ms (A and B) and 220 ms (D and E) after the stimuli, during the visualization of Tunnels (A and D) and Checkerboards (B and E).

Source activations are averaged from 10 milliseconds before to 10 milliseconds after the N140 (140 ms) and the P220 (220 ms) respectively.

https://doi.org/10.1371/journal.pone.0262417.g007
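The ±10 ms averaging around a component latency can be sketched as follows; the sampling rate, epoch length, and region count are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def mean_in_window(activations, sfreq, t_peak, half_width=0.010):
    """Average source activations over [t_peak - 10 ms, t_peak + 10 ms].

    activations : array of shape (n_sources, n_samples), time-locked so
                  that sample 0 corresponds to stimulus onset.
    sfreq       : sampling frequency in Hz (hypothetical value below).
    t_peak      : latency of the component in seconds (e.g. 0.140 for N140).
    """
    lo = int(round((t_peak - half_width) * sfreq))
    hi = int(round((t_peak + half_width) * sfreq))
    return activations[:, lo:hi + 1].mean(axis=1)

sfreq = 1000.0                  # assumed sampling rate, not from the paper
acts = np.ones((68, 500))       # e.g. 68 Desikan-Killiany regions, 500-sample epoch
n140 = mean_in_window(acts, sfreq, 0.140)
print(n140.shape)  # (68,)
```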

3.2.4 Summary of results.

The boxplots in Fig 8 summarize the comparative analysis of the classification pipelines with respect to the intra-subject and inter-subject conditions. The classification results for the intra-subject and inter-subject pipelines were computed using a 4-fold cross-validation and a Leave-One-Subject-Out (LOSO) cross-validation respectively. The accuracy for each classification task was significantly higher in the intra-subject than in the inter-subject condition (p < 0.001). Additionally, we observed a statistically significant difference between the classification accuracy of Tunnel vs Checkerboard with and without inverse modeling in both the inter-subject condition (p < 0.001) and the intra-subject condition (p < 0.001). These statistical comparisons were computed using a Mann–Whitney U test.

Fig 8. Summary of the accuracies reached by the inter-subject and intra-subject classification pipelines based on EEG signals and (sub)cortical sources of Tunnel and Checkerboard visual stimuli.

https://doi.org/10.1371/journal.pone.0262417.g008
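Both evaluation schemes map onto standard scikit-learn splitters. A sketch with dummy data (array sizes are illustrative only; in the paper the 4-fold CV is run within each subject's trials):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, LeaveOneGroupOut

rng = np.random.default_rng(0)
n_subjects, trials_per_subject = 15, 40                     # illustrative sizes
X = rng.normal(size=(n_subjects * trials_per_subject, 12))  # dummy feature matrix
y = rng.integers(0, 2, size=len(X))                         # Tunnel vs Checkerboard labels
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

# Intra-subject style evaluation: stratified 4-fold CV over a subject's trials.
kfold = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
print(kfold.get_n_splits())  # 4

# Inter-subject evaluation: leave-one-subject-out, one fold per subject.
loso = LeaveOneGroupOut()
print(loso.get_n_splits(X, y, groups))  # 15
for train_idx, test_idx in loso.split(X, y, groups):
    # every test fold contains exactly one held-out subject
    assert len(np.unique(groups[test_idx])) == 1
```

`LeaveOneGroupOut` guarantees that no trial from the held-out subject leaks into the training set, which is what makes the inter-subject accuracy a genuine measure of cross-subject generalization.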

3.2.5 Comparison of classification pipelines.

Following the comparative analysis in section 2.8, and as suggested by a reviewer, we compare here the classification accuracy of our approach with that of other classification pipelines using the same cross-validation methodology. The purposes of this comparison are to (i) quantify the benefits of applying xDAWN filtering and projecting covariance matrices onto the tangent space, and (ii) establish the robustness and generalizability of our approach with respect to other classification pipelines.

To that end, Fig 9 compares the mean cross-validated accuracy, with respect to the intra-subject or inter-subject conditions, computed for the following classification pipelines with and without inverse modeling:

Fig 9. Comparison of the mean accuracies reached by different inter-subject and intra-subject classification pipelines based on EEG signals and (sub)cortical sources.

The asterisk designates the classification pipeline used in our approach.

https://doi.org/10.1371/journal.pone.0262417.g009

  • LR on vectorized EEG signals/sources activations: the EEG signals or cortical sources activations are unrolled into a 1-dimensional vector and classified using a logistic regression (LR).
  • Covariance matrices and LR: covariance matrices of EEG signals or cortical sources are estimated, unrolled into a 1-dimensional vector and classified using a logistic regression.
  • Covariance matrices, TS and LR: covariance matrices of EEG signals or cortical sources are estimated, projected onto the tangent space and classified using a logistic regression.
  • xDAWN covariance and LR: covariance matrices are estimated from the result of xDAWN spatial filtering applied on EEG signals or cortical sources, unrolled into a 1-dimensional vector and classified using a logistic regression.
  • CSP and LR: the output of the Common Spatial Patterns (CSP) algorithm [59, 60] applied on EEG signals or cortical sources is classified using a logistic regression.
  • xDAWN covariance, TS and LR: covariance matrices are estimated from the result of xDAWN spatial filtering applied on EEG signals or cortical sources, projected onto the tangent space and classified using a logistic regression.
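As a sketch of the tangent-space (TS) step shared by several of these pipelines: each SPD covariance matrix is whitened by a reference point (in practice the Riemannian mean of the training set; the identity here, for simplicity), mapped through the matrix logarithm, and vectorized. Libraries such as pyRiemann implement this; below is a minimal NumPy/SciPy version:

```python
import numpy as np
from scipy.linalg import sqrtm, logm, inv

def tangent_space(covs, c_ref):
    """Project SPD covariance matrices onto the tangent space at c_ref:
    S_i = logm(C_ref^{-1/2} C_i C_ref^{-1/2}), then vectorize the upper
    triangle (off-diagonal terms scaled by sqrt(2) to preserve the norm)."""
    w = inv(sqrtm(c_ref))                 # whitening by the reference point
    n = covs.shape[1]
    iu = np.triu_indices(n)
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    out = []
    for c in covs:
        s = logm(w @ c @ w)               # symmetric log-map
        out.append(np.real(s[iu]) * scale)
    return np.array(out)

# Toy example: 2x2 SPD matrices; with the identity as reference, S = logm(C).
covs = np.array([np.diag([1.0, 4.0]), np.diag([2.0, 0.5])])
feats = tangent_space(covs, np.eye(2))
print(feats.shape)  # (2, 3): n_channels*(n_channels+1)/2 features per trial
```

The resulting vectors live in a Euclidean space, which is why an ordinary logistic regression can then be applied on top of them.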

The comparison of the classification results from these different pipelines, illustrated in Fig 9, highlights several patterns. Firstly, the classification pipeline used in our approach outperforms the other pipelines in both intra-subject and inter-subject conditions, using either EEG signals or cortical sources. Secondly, xDAWN spatial filtering significantly improves the classification accuracy in the inter-subject condition (p < 0.001), i.e. inter-subject generalizability, but not in the intra-subject condition (p > 0.001). Conversely, the projection onto the tangent space significantly improves the classification accuracy in the intra-subject condition (p < 0.001) but not in the inter-subject condition (p > 0.001); this latter absence of improvement may also be explained by the difficulty of discriminating inter-subject EEG signals and cortical sources without xDAWN spatial filtering for this task. Lastly, the classification pipeline used in our approach significantly outperforms the CSP algorithm in the inter-subject condition (p < 0.001) but not in the intra-subject condition (p > 0.001). These statistical comparisons were computed using a Mann–Whitney U test.
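These pairwise comparisons rely on the Mann–Whitney U test, which with SciPy is a one-liner; the per-fold accuracy samples below are synthetic placeholders, not the study's values:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical cross-validated accuracies for two pipelines (synthetic data)
rng = np.random.default_rng(42)
acc_with_xdawn = rng.normal(0.84, 0.02, size=60)
acc_without_xdawn = rng.normal(0.70, 0.02, size=60)

# Two-sided Mann-Whitney U test on the two accuracy distributions
stat, p = mannwhitneyu(acc_with_xdawn, acc_without_xdawn, alternative="two-sided")
print(p < 0.001)  # True for clearly separated samples like these
```

Being rank-based, the test makes no normality assumption about the accuracy distributions, which is appropriate for bounded metrics such as classification accuracy.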

4. Discussion

In this work, we leveraged the BCI methodology and related mathematical tools to better analyze EEG signals that are commonly summarized and interpreted through the analysis of evoked-potential components. Since the renewal of evoked-potential interpretation following the oscillation models of Makeig et al. [8] and the introduction of the EEGLab software [6], the dynamic aspects of event-related potentials (ERP) have provided better access to the underlying neurophysiological mechanisms sustaining the ERP components. At the same time, the development of the BCI domain [12, 61] has required online procedures to extract pertinent neural information from noisy environments. As BCI procedures are specifically designed to work with only one subject at a time, BCI tools are well suited to deciphering individual EEG-evoked responses and extending our understanding beyond grand-average representations of the ERP.

We show that it is possible to effectively discriminate between the display of different visual items (Checkerboard versus 3D navigational image) in single-trial EEG signals from one subject. Moreover, the present methodology allows us to demonstrate that single-trial discrimination based on Riemannian classification pipelines can be generalized to multiple subjects constituting a single batch of single trials. In other words, we may here introduce the “Riemannian Single-Trial Analysis” (RSTA) approach, as opposed to the Grand Average Analysis (GAA) approach commonly used in clinical neuroscience. Although the latter is useful for highlighting statistically significant differences between subject populations, GAA is prone to capturing artificial constructs due to individual peculiarities. Indeed, averaged evoked components may be sign-inverted or phase-jittered depending on the subjects included in the grand average. These statistically significant differences may therefore be spurious and may not represent the underlying physiological characteristics of the evoked potential resulting from the GAA. Moreover, a statistical difference induced by the GAA does not necessarily imply the physiological plausibility or the genuine discriminative property that should be the basis of a clinical application. As opposed to the GAA, the objective of the RSTA approach is to allow, using interpretability techniques on Riemannian classification pipelines, the identification and extraction of subject-, task-, and trial-specific discriminative patterns that can unlock a deeper understanding of the fundamental neurophysiological mechanisms.

It was shown in previous studies [62–66] that the single-trial configuration was able to discriminate cognitive evoked components, allowing the detection of the intermittency of cognitive events during repetitive stimulation. Although the evoked components are not visually distinguishable in the present single-trial configuration, as displayed in Fig 3B, we show that our classification pipeline based on covariance matrices and Riemannian geometry is able to discriminate between the presentation of the two displays, reaching a single-trial classification accuracy of about 84% and 93% for inter-subject and intra-subject classification respectively.

Despite methodological differences in paradigm and in the type of evoked potentials or brain-signal recordings, the results of the present classification pipelines based on Riemannian geometry corroborate the single-trial classification performances of similar Riemannian approaches applied to object-familiarity recognition [67], motor imagery [68, 69], P300 BCI [70] and EEG respiratory states [71].

Using inverse modeling based on swLORETA [10], we previously showed that, whatever the presented image (Checkerboard or Tunnel), the same generators of the P100 were located in the occipital cortex (BA18 and BA19) and the right inferior temporal cortex (BA20). However, the left fusiform cortex (BA37) was additionally recruited only for the Checkerboard, not for the 3D-Tunnel. In the present study, despite the limited number of electrodes (n = 12), inverse modeling based on sLORETA largely reproduces these results using the Desikan-Killiany atlas [53, 54]. However, the inter-subject classification pipeline reaches a significantly lower accuracy when using estimated sources (63%) rather than EEG signals (84%). In contrast, the same classification method applied in the intra-subject condition reaches an accuracy and an AUC of 82% and 0.92 respectively, a performance comparable to the intra-subject classification based on EEG signals (93%). The lower performance of inter-subject classification using sLORETA source estimations may result from either (i) the morphological differences between the subject-specific cortical dipole orientations and the average head model used to compute the sLORETA lead field matrix, which may have blurred the signal differentiation between the two images, or (ii) the subsequent xDAWN spatial filtering being unable to extract discriminative information that may be located in different, subject-specific, cortical atlas regions. Conversely, this problem does not occur when only one subject is considered. This indicates a limit of the sensitivity of the present method.
However, the present study shows that such Riemannian classification pipelines allow, on the one hand, scalp-EEG signal discrimination across multiple subjects, which could be used in an object visual-recognition BCI, and, on the other hand, EEG neural-generator discrimination in a single subject, which could be used in a clinical context. This latter aspect is in line with the present trend favoring the decoding of individual brain activity [72, 73], which outperforms the use of the grand average from multiple subjects.

4.1 Limitations

There are several limitations to the current study. Firstly, the limited number of electrodes (n = 12), chosen to favor clinical and BCI applications, may have reduced the precision of the 3D inverse-modeling estimation performed with sLORETA and may thus have impacted the related classification performance. Secondly, although the number of subjects (n = 15) was limited compared to other EEG studies, the dataset, consisting of the number of trials per subject times the number of subjects, was considered sufficient to obtain meaningful inter-subject classification results using the LOSO-cross-validated RSTA approach. Nevertheless, further studies with a larger number of participants will be necessary in order to generalize to the population. Thirdly, we are also aware that the presented results are directed towards the global aspect of the visual image, even though the two images differed both in physical characteristics (bottom-up features such as luminance and contrast) and in representational content (top-down influence). The distinct influence of these physical and neuro-cognitive factors on the Riemannian classification pipelines will be further investigated. Future research could also be oriented towards new stimulation paradigms in order to differentiate EEG signals evoked by images with similar physical characteristics but different representational contents. Moreover, in order to better understand the fundamental underlying neurophysiological mechanisms, future research may also be dedicated to quantifying the importance of the discriminative patterns identified by the RSTA approach.

5. Conclusion

This work shows that classification pipelines based on Riemannian geometry can effectively discriminate between the display of different visual items (Checkerboard versus 3D navigational image) in single EEG trials. The presented methodology reaches a single-trial classification accuracy of about 84% and 93% for inter-subject and intra-subject classification respectively using surface EEG, which was shown to be significantly better than the accuracy of other state-of-the-art pipelines. Furthermore, the classification algorithms trained on sLORETA estimations fail to generalize across multiple subjects (63% accuracy) but reach an intra-subject classification accuracy of 82%, which opens the way to future functional links between neuroimaging and the discrimination of EEG dynamics. In this context, the development of new advanced mathematical frameworks will be paramount to developing a deeper understanding of the underlying communication dynamics within the neural network topography from which the performances of the Riemannian classification pipelines emerged.

Acknowledgments

The authors thank the fund Leibu and the Brain & Society foundation for their support as well as T. D’Angelo, M. Dufief, E. Hortmanns, E. Pecoraro, and E. Toussaint for expert technical assistance.

References

  1. Berger H., “Über das Elektrenkephalogramm des Menschen,” Arch. Für Psychiatr. Nervenkrankh., vol. 87, no. 1, pp. 527–570, Dec. 1929.
  2. Adrian E. D. and Matthews B. H. C., “The Berger rhythm: potential changes from the occipital lobes in man,” Brain, vol. 57, no. 4, pp. 355–385, Dec. 1934.
  3. Mountcastle V. B., “Modality and topographic properties of single neurons of cat’s somatic sensory cortex,” J. Neurophysiol., vol. 20, no. 4, pp. 408–434, Jul. 1957, pmid:13439410
  4. Poulet J. F. A. and Petersen C. C. H., “Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice,” Nature, vol. 454, no. 7206, pp. 881–885, Aug. 2008, pmid:18633351
  5. Dawson G. D., “A summation technique for the detection of small evoked potentials,” Electroencephalogr. Clin. Neurophysiol., vol. 6, pp. 65–84, Jan. 1954, pmid:13141922
  6. Delorme A. and Makeig S., “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” J. Neurosci. Methods, vol. 134, no. 1, pp. 9–21, Mar. 2004, pmid:15102499
  7. Pernet C. R., Martinez-Cancino R., Truong D., Makeig S., and Delorme A., “From BIDS-Formatted EEG Data to Sensor-Space Group Results: A Fully Reproducible Workflow With EEGLAB and LIMO EEG,” Front. Neurosci., vol. 14, 2021, pmid:33519362
  8. Makeig S., et al., “Dynamic brain sources of visual evoked responses,” Science, vol. 295, no. 5555, pp. 690–694, Jan. 2002, pmid:11809976
  9. Pascual-Marqui R. D., Michel C. M., and Lehmann D., “Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain,” Int. J. Psychophysiol., vol. 18, no. 1, pp. 49–65, Oct. 1994.
  10. Palmero-Soler E., Dolan K., Hadamschek V., and Tass P. A., “swLORETA: a novel approach to robust source localization and synchronization tomography,” Phys. Med. Biol., vol. 52, no. 7, pp. 1783–1800, Apr. 2007, pmid:17374911
  11. Fetz E. E., “Operant conditioning of cortical unit activity,” Science, vol. 163, no. 3870, pp. 955–958, Feb. 1969, pmid:4974291
  12. Wolpaw J. R., Birbaumer N., McFarland D. J., Pfurtscheller G., and Vaughan T. M., “Brain-computer interfaces for communication and control,” Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, Jun. 2002, pmid:12048038
  13. Kurita-Tashima S., Tobimatsu S., Nakayama-Hiromatsu M., and Kato M., “Effect of check size on the pattern reversal visual evoked potential,” Electroencephalogr. Clin. Neurophysiol., vol. 80, no. 3, pp. 161–166, Jun. 1991, pmid:1713147
  14. Cheron G., et al., “Gravity influences top-down signals in visual processing,” PloS One, vol. 9, no. 1, p. e82371, 2014, pmid:24400069
  15. Shigihara Y., Hoshi H., and Zeki S., “Early visual cortical responses produced by checkerboard pattern stimulation,” NeuroImage, vol. 134, pp. 532–539, 2016, pmid:27083528
  16. Di Russo F., Martínez A., Sereno M. I., Pitzalis S., and Hillyard S. A., “Cortical sources of the early components of the visual evoked potential,” Hum. Brain Mapp., vol. 15, no. 2, pp. 95–111, Feb. 2002, pmid:11835601
  17. Rossion B. and Caharel S., “ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception,” Vision Res., vol. 51, no. 12, pp. 1297–1311, Jun. 2011, pmid:21549144
  18. Baijot S., et al., “EEG Dynamics of a Go/Nogo Task in Children with ADHD,” Brain Sci., vol. 7, no. 12, Dec. 2017, pmid:29261133
  19. Leroy A., Cevallos C., Cebolla A.-M., Caharel S., Dan B., and Cheron G., “Short-term EEG dynamics and neural generators evoked by navigational images,” PloS One, vol. 12, no. 6, p. e0178817, 2017, pmid:28632774
  20. Langeslag S. J. E. and van Strien J. W., “Early visual processing of snakes and angry faces: An ERP study,” Brain Res., vol. 1678, pp. 297–303, Jan. 2018, pmid:29102778
  21. Hanslmayr S., et al., “Alpha phase reset contributes to the generation of ERPs,” Cereb. Cortex, vol. 17, no. 1, pp. 1–8, Jan. 2007, pmid:16452640
  22. Sauseng P. and Klimesch W., “What does phase information of oscillatory brain activity tell us about cognitive processes?,” Neurosci. Biobehav. Rev., vol. 32, no. 5, pp. 1001–1013, Jul. 2008, pmid:18499256
  23. Freunberger R., Fellinger R., Sauseng P., Gruber W., and Klimesch W., “Dissociation between phase-locked and nonphase-locked alpha oscillations in a working memory task,” Hum. Brain Mapp., vol. 30, no. 10, pp. 3417–3425, Oct. 2009, pmid:19384888
  24. Iemi L., et al., “Multiple mechanisms link prestimulus neural oscillations to sensory responses,” eLife, vol. 8, Jun. 2019, pmid:31188126
  25. Cheron G., et al., “Pure phase-locking of beta/gamma oscillation contributes to the N30 frontal component of somatosensory evoked potentials,” BMC Neurosci., vol. 8, p. 75, Sep. 2007, pmid:17877800
  26. Delorme A., Miyakoshi M., Jung T.-P., and Makeig S., “Grand average ERP-image plotting and statistics: A method for comparing variability in event-related single-trial EEG activities across subjects and conditions,” J. Neurosci. Methods, vol. 250, pp. 3–6, Jul. 2015, pmid:25447029
  27. Kappenman E. S. and Luck S. J., “Best Practices for Event-Related Potential Research in Clinical Populations,” Biol. Psychiatry Cogn. Neurosci. Neuroimaging, vol. 1, no. 2, pp. 110–115, Mar. 2016, pmid:27004261
  28. Pascual-Marqui R. D., “Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details,” Methods Find. Exp. Clin. Pharmacol., vol. 24 Suppl D, pp. 5–12, 2002, pmid:12575463
  29. Cebolla A. M., Palmero-Soler E., Dan B., and Cheron G., “Frontal phasic and oscillatory generators of the N30 somatosensory evoked potential,” NeuroImage, vol. 54, no. 2, pp. 1297–1306, Jan. 2011, pmid:20813188
  30. Cebolla A.-M., Palmero-Soler E., Leroy A., and Cheron G., “EEG Spectral Generators Involved in Motor Imagery: A swLORETA Study,” Front. Psychol., vol. 8, p. 2133, 2017, pmid:29312028
  31. Leroy A., et al., “EEG Dynamics and Neural Generators in Implicit Navigational Image Processing in Adults with ADHD,” Neuroscience, Jan. 2018, pmid:29343456
  32. Lotte F., Congedo M., Lécuyer A., Lamarche F., and Arnaldi B., “A review of classification algorithms for EEG-based brain-computer interfaces,” J. Neural Eng., vol. 4, no. 2, pp. R1–R13, Jun. 2007, pmid:17409472
  33. Rivet B., Souloumiac A., Attina V., and Gibert G., “xDAWN algorithm to enhance evoked potentials: application to brain-computer interface,” IEEE Trans. Biomed. Eng., vol. 56, no. 8, pp. 2035–2043, Aug. 2009, pmid:19174332
  34. Barachant A., Bonnet S., Congedo M., and Jutten C., “Multiclass brain-computer interface classification by Riemannian geometry,” IEEE Trans. Biomed. Eng., vol. 59, no. 4, pp. 920–928, Apr. 2012, pmid:22010143
  35. Cecotti H. and Ries A. J., “Best practice for single-trial detection of event-related potentials: Application to brain-computer interfaces,” Int. J. Psychophysiol., vol. 111, pp. 156–169, 2017, pmid:27453051
  36. Yger F., Berar M., and Lotte F., “Riemannian Approaches in Brain-Computer Interfaces: A Review,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 25, no. 10, pp. 1753–1762, 2017, pmid:27845666
  37. Wang J., Feng Z., Lu N., and Luo J., “Toward optimal feature and time segment selection by divergence method for EEG signals classification,” Comput. Biol. Med., vol. 97, pp. 161–170, 2018, pmid:29747059
  38. Blum S., Jacobsen N. S. J., Bleichner M. G., and Debener S., “A Riemannian Modification of Artifact Subspace Reconstruction for EEG Artifact Handling,” Front. Hum. Neurosci., vol. 13, p. 141, 2019, pmid:31105543
  39. Chevallier S., Kalunga E. K., Barthélemy Q., and Monacelli E., “Review of Riemannian Distances and Divergences, Applied to SSVEP-based BCI,” Neuroinformatics, Jun. 2020, pmid:32562187
  40. Xu J., Grosse-Wentrup M., and Jayaram V., “Tangent space spatial filters for interpretable and efficient Riemannian classification,” J. Neural Eng., vol. 17, no. 2, p. 026043, May 2020, pmid:32224508
  41. Zeng H. and Song A., “Optimizing Single-Trial EEG Classification by Stationary Matrix Logistic Regression in Brain–Computer Interface,” IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 11, pp. 2301–2313, Nov. 2016, pmid:26513804
  42. Pernet C., Sajda P., and Rousselet G., “Single-Trial Analyses: Why Bother?,” Front. Psychol., vol. 2, p. 322, 2011, pmid:22073038
  43. Lotte F., et al., “A review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update,” J. Neural Eng., vol. 15, no. 3, p. 031005, 2018, pmid:29488902
  44. Congedo M., Rodrigues P. L. C., Bouchard F., Barachant A., and Jutten C., “A closed-form unsupervised geometry-aware dimensionality reduction method in the Riemannian Manifold of SPD matrices,” Conf. Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., vol. 2017, pp. 3198–3201, 2017, pmid:29060578
  45. Vidal M., Amorim M.-A., and Berthoz A., “Navigating in a virtual three-dimensional maze: how do egocentric and allocentric reference frames interact?,” Brain Res. Cogn. Brain Res., vol. 19, no. 3, pp. 244–258, May 2004, pmid:15062862
  46. Barachant A., “MEG decoding using Riemannian geometry and unsupervised classification,” 2014.
  47. Ledoit O. and Wolf M., “A well-conditioned estimator for large-dimensional covariance matrices,” J. Multivar. Anal., vol. 88, no. 2, pp. 365–411, 2004.
  48. Chen Y., Wiesel A., Eldar Y. C., and Hero A. O., “Shrinkage Algorithms for MMSE Covariance Estimation,” IEEE Trans. Signal Process., vol. 58, no. 10, pp. 5016–5029, Oct. 2010.
  49. Moakher M., “A Differential Geometric Approach to the Geometric Mean of Symmetric Positive-Definite Matrices,” SIAM J. Matrix Anal. Appl., vol. 26, pp. 735–747, Jan. 2005.
  50. Tuzel O., Porikli F., and Meer P., “Pedestrian Detection via Classification on Riemannian Manifolds,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 10, pp. 1713–1727, Oct. 2008, pmid:18703826
  51. Barachant A., Bonnet S., Congedo M., and Jutten C., “Classification of covariance matrices using a Riemannian-based kernel for BCI applications,” Neurocomputing, vol. 112, pp. 172–178, Jul. 2013.
  52. Gramfort A., et al., “MNE software for processing MEG and EEG data,” NeuroImage, vol. 86, pp. 446–460, Feb. 2014, pmid:24161808
  53. Desikan R. S., et al., “An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest,” NeuroImage, vol. 31, no. 3, pp. 968–980, Jul. 2006, pmid:16530430
  54. Potvin O., Dieumegarde L., Duchesne S., and Alzheimer’s Disease Neuroimaging Initiative, “Freesurfer cortical normative data for adults using Desikan-Killiany-Tourville and ex vivo protocols,” NeuroImage, vol. 156, pp. 43–64, 2017, pmid:28479474
  55. Gramfort A., et al., “MEG and EEG data analysis with MNE-Python,” Front. Neurosci., vol. 7, 2013, pmid:24431986
  56. Harris C. R., et al., “Array programming with NumPy,” Nature, vol. 585, no. 7825, Sep. 2020, pmid:32939066
  57. Jones E., Oliphant T., and Peterson P., “SciPy: Open Source Scientific Tools for Python,” Jan. 2001.
  58. Pedregosa F., et al., “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res., vol. 12, Jan. 2012.
  59. Koles Z. J., “The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG,” Electroencephalogr. Clin. Neurophysiol., vol. 79, no. 6, pp. 440–447, Dec. 1991, pmid:1721571
  60. Blankertz B., Tomioka R., Lemm S., Kawanabe M., and Muller K., “Optimizing Spatial filters for Robust EEG Single-Trial Analysis,” IEEE Signal Process. Mag., vol. 25, no. 1, pp. 41–56, 2008.
  61. Wolpaw J. R., et al., “Brain-computer interface technology: a review of the first international meeting,” IEEE Trans. Rehabil. Eng., vol. 8, no. 2, pp. 164–173, Jun. 2000, pmid:10896178
  62. Tomberg C. and Desmedt J. E., “A method for identifying short-latency human cognitive potentials in single trials by scalp mapping,” Neurosci. Lett., vol. 168, no. 1–2, pp. 123–125, Feb. 1994, pmid:8028763
  63. Tomberg C. and Desmedt J. E., “Non-averaged human brain potentials in somatic attention: the short-latency cognition-related P40 component,” J. Physiol., vol. 496 (Pt 2), pp. 559–574, Oct. 1996, pmid:8910238
  64. Coon W. G. and Schalk G., “A method to establish the spatiotemporal evolution of task-related cortical activity from electrocorticographic signals in single trials,” J. Neurosci. Methods, vol. 271, pp. 76–85, Sep. 2016, pmid:27427301
  65. Rey H. G., Ahmadi M., and Quian Quiroga R., “Single trial analysis of field potentials in perception, learning and memory,” Curr. Opin. Neurobiol., vol. 31, pp. 148–155, Apr. 2015, pmid:25460071
  66. Kalaganis F. P., Laskaris N. A., Chatzilari E., Nikolopoulos S., and Kompatsiaris I., “A Riemannian Geometry Approach to Reduced and Discriminative Covariance Estimation in Brain Computer Interfaces,” IEEE Trans. Biomed. Eng., vol. 67, no. 1, pp. 245–255, Jan. 2020, pmid:30998456
  67. Stewart A. X., Nuthmann A., and Sanguinetti G., “Single-trial classification of EEG in a visual object task using ICA and machine learning,” J. Neurosci. Methods, vol. 228, pp. 1–14, May 2014, pmid:24613798
  68. Guan S., Zhao K., and Yang S., “Motor Imagery EEG Classification Based on Decision Tree Framework and Riemannian Geometry,” Computational Intelligence and Neuroscience, Jan. 2019. https://www.hindawi.com/journals/cin/2019/5627156/ (accessed Aug. 03, 2020). pmid:30804988
  69. Majidov I. and Whangbo T., “Efficient Classification of Motor Imagery Electroencephalography Signals Using Deep Learning Methods,” Sensors, vol. 19, no. 7, Jan. 2019, pmid:30978978
  70. Korczowski L., Congedo M., and Jutten C., “Single-trial classification of multi-user P300-based Brain-Computer Interface using Riemannian geometry,” in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Aug. 2015, pp. 1769–1772.
  71. Navarro-Sune X., et al., “Riemannian Geometry Applied to Detection of Respiratory States From EEG Signals: The Basis for a Brain-Ventilator Interface,” IEEE Trans. Biomed. Eng., vol. 64, no. 5, pp. 1138–1148, 2017, pmid:28129143
  72. Karch J. D., Sander M. C., von Oertzen T., Brandmaier A. M., and Werkle-Bergner M., “Using within-subject pattern classification to understand lifespan age differences in oscillatory mechanisms of working memory selection and maintenance,” NeuroImage, vol. 118, pp. 538–552, 2015, pmid:25929619
  73. Leroy A. and Cheron G., “EEG dynamics and neural generators of psychological flow during one tightrope performance,” Sci. Rep., vol. 10, no. 1, Jul. 2020, pmid:31913322