Processing structured symbolic sequences with Recurrent Neural Networks - presented @ EuroSPIN workshop 2013
The abilities to encode, process and represent structured sequences of perceptual information, and to finely sequence motor actions, are ubiquitous features of human cognition, fundamental to a variety of common, everyday tasks. Sequential learning provides a domain-general mechanism for acquiring predictive relations between sequence elements that conform to a set of structural regularities, on the basis of which the brain can anticipate upcoming elements.
To account for the ability of neuronal circuits to process data with embedded temporal dependencies (expressed as symbolic time series), recurrent neural network (RNN) models are a natural choice, both by virtue of their inherent recurrent connectivity (which allows context information to be maintained in the units' activities) and because of their biological plausibility.
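As a minimal formalization of this idea (our notation, not taken from the presentation), a discrete-time RNN maintains context through a state update of the form

    x_t = f(W x_{t-1} + W_{in} u_t)

where u_t is the current input symbol (e.g. one-hot encoded), x_t is the vector of unit activities, W is the recurrent weight matrix, W_{in} the input weight matrix, and f a pointwise nonlinearity such as tanh; the recurrence lets x_t carry information about the entire input history.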
In this work, we explore the properties and characteristics of different recurrent network models, built according to the reservoir computing framework and applied to a series of sequence processing tasks designed to assess their ability to acquire the temporal dependencies and statistics of the input data. We assess their properties and performance as 'predictive machines' (relating this to their capacity to learn the set of generative rules underlying different grammars), and explore their ability to adequately capture and represent variable-length temporal dependencies embedded in the input sequences. We also compare models with varying degrees of biological realism, exploring the trade-off between abstraction and realism in this specific domain.
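For concreteness, below is a minimal sketch of one such reservoir model, an echo-state-network-style reservoir trained for next-symbol prediction, in the general spirit of the reservoir computing framework mentioned above. All names, parameter values and the random placeholder sequence are illustrative assumptions of ours, not details from the presentation; as is characteristic of reservoir computing, only the linear readout is trained, while the recurrent reservoir stays fixed.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: a 3-symbol alphabet, one-hot encoded.
    n_symbols, n_reservoir = 3, 200

    # Fixed random input and reservoir weights (only the readout is
    # trained). Scales and sizes are illustrative, not from the paper.
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_symbols))
    W = rng.normal(0.0, 1.0, (n_reservoir, n_reservoir))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

    def run_reservoir(seq):
        """Drive the reservoir with a symbol sequence; collect states."""
        x = np.zeros(n_reservoir)
        states = []
        for s in seq:
            u = np.eye(n_symbols)[s]          # one-hot input symbol
            x = np.tanh(W @ x + W_in @ u)     # recurrent state keeps context
            states.append(x.copy())
        return np.array(states)

    # Placeholder sequence; a real experiment would use strings generated
    # by a grammar, so that next-symbol prediction is actually learnable.
    seq = rng.integers(0, n_symbols, 1000)
    X = run_reservoir(seq[:-1])               # states at each step
    Y = np.eye(n_symbols)[seq[1:]]            # one-hot next symbols

    # Train the linear readout by ridge regression.
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_reservoir), X.T @ Y).T
    pred = np.argmax(X @ W_out.T, axis=1)     # next-symbol predictions

In an experiment of the kind described above, prediction accuracy on held-out grammar strings would then indicate how well the reservoir has captured the generative rules and the embedded temporal dependencies of the input sequences.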