Abstract
The traditional approach in neuroscience relies on encoding models, in which brain responses are related to the features of the presented stimuli in order to establish reproducible dependencies. To reduce neuronal and experimental noise, brain signals are usually averaged across trials to detect reliable and coherent brain activity. However, neural representations of stimulus features can be spread over time, frequency, and space, motivating the use of alternative methods that relate stimulus features to brain responses. We propose a coherence-based spectro-spatial filter method that reconstructs stimulus features from intracortical brain signals. The proposed method models the trials of an experiment as realizations of a random process and extracts patterns that are common to the brain signals and the presented stimuli. These patterns, originating from different recording sites, are then combined (spatial filtering) to form a final prediction. Our results from three different cognitive tasks (motor movements, speech perception, and speech production) concur in showing that the proposed method significantly improves the prediction of stimulus features over traditional methods such as multilinear regression with distributed lags and artificial neural networks. Furthermore, analyses of the model parameters show anatomical discriminability for the execution of different motor movements; this anatomical discriminability is also observed in the perception and production of different words. These properties could be exploited in the design of neuroprostheses, as well as for exploring normal brain function.