Neural circuits as computational dynamical systems
Introduction
Systems neuroscience is heading towards the simultaneous recording or imaging of many neurons, often while an animal engages in a complicated behavior [1, 2••, 3, 4••, 5, 63]. This trend has led to a wealth of data that requires new conceptual approaches, methods of analysis, and modeling techniques. For example, recorded neurons often display perplexing activity patterns [6, 7, 8••, 9••, 10] that are not easily explained in terms of tuning for sensory parameters or for the variety of possible internal parameters that seem natural to an experimentalist. In particular, many studies of higher cortical areas have shown bewildering temporal dynamics at the single-cell and population levels [1, 2••, 3, 4••, 5, 6, 7, 8••] (Ames et al., unpublished data), often while the animal is exposed to constant or simple stimuli. As a result of these observations, many in the field are arriving at the conclusion that it is no longer enough to correlate the firing rates of individual neurons with experimental parameters. Instead, we should delve deeper and attempt to understand the dynamical mechanisms underlying the computations, or at least to provide constraints on the possible mechanisms [6, 7, 8••, 9••, 10, 11]. We require examples of the classes of dynamics that do (or do not) allow networks of neurons to perform useful computations.
Experimental work has begun applying dynamical systems approaches [12, 13], originally applied in neuroscience to single neurons, to the population responses of simultaneously and individually recorded neurons [1, 2••, 4••, 8••, 14, 15, 16, 17]. The dynamical systems approach explicitly describes neural population responses as time-varying trajectories in a high-dimensional state space and views the dynamics as acting to shape these trajectories. In this framework individual neurons work in concert to carry out computations. Much as the population vector requires the entire neural population to read out a signal, the dynamical systems approach implies that one must understand the population in order to understand the dynamics of a single neuron.
One model class that can accommodate high-dimensional, distributed and dynamical data is the optimized recurrent neural network (RNN) (Figure 1a). RNNs are a natural model class to study mechanisms in systems neuroscience because they are both dynamical and computational. The purpose of this article is to better introduce RNNs to the field of systems neuroscience. I review recent technical progress in understanding and optimizing RNNs, focusing on a recent result from prefrontal cortex.
Recurrent neural networks
Wilson and Cowan originally developed the recurrent network in a biological context to describe the average firing rates of groups of cells [18]. A more modern and general definition is given by

τ dx_i/dt = −x_i + Σ_j J_ij r_j + Σ_k B_ik u_k,    r_i = h(x_i),

where the ith component, x_i, of the vector x can be viewed as the summed and filtered synaptic currents at the soma of a biological neuron, J is the matrix of recurrent weights, and B weights the external inputs u. The continuous variable, r_i, is the instantaneous “firing rate” and is a saturating nonlinear function, h, of x_i (Figure 1b). Thus, the RNN
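As a concrete illustration, the rate equation above can be simulated with simple Euler integration. The sketch below assumes a tanh nonlinearity and randomly chosen weights; the function name, network size, and parameter values are illustrative, not taken from any particular study:

```python
import numpy as np

def simulate_rnn(J, B, u, x0, tau=0.01, dt=0.001):
    """Euler-integrate tau * dx/dt = -x + J r + B u, with r = tanh(x).

    u has shape (T, n_inputs); returns the firing rates r at every step."""
    x = x0.copy()
    rates = np.zeros((u.shape[0], len(x0)))
    for t in range(u.shape[0]):
        r = np.tanh(x)
        x = x + (dt / tau) * (-x + J @ r + B @ u[t])
        rates[t] = np.tanh(x)
    return rates

# Illustrative example: a random network driven by a brief input pulse.
rng = np.random.default_rng(0)
N = 50
g = 1.5  # gain; g > 1 produces rich spontaneous dynamics in random networks
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
B = rng.standard_normal((N, 1))
u = np.zeros((500, 1))
u[:50] = 1.0
rates = simulate_rnn(J, B, u, x0=np.zeros(N))
```

Because the nonlinearity saturates, the rates stay bounded in (−1, 1) regardless of the gain, which is what makes such networks well behaved despite strong feedback.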
Optimizing RNNs
In many modeling studies, a network model is designed by hand to reproduce, and thus explain, a set of experimental findings (e.g. see [22]). Here, in contrast, I focus on modeling with RNNs that have been optimized, or “trained”. For example, assume one wanted to study integration. In the designed approach, a network would be explicitly constructed such that it integrates an input. Specifically, the weights would be adjusted such that positive feedback internal to the network had a
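The designed approach can be illustrated with a minimal, hand-constructed example: a single linear unit whose self-feedback weight is tuned to exactly cancel its leak, turning it into an integrator. The parameter values below are purely illustrative:

```python
import numpy as np

def run(w, u, dt=0.001):
    """Single linear unit with self-feedback w: dx/dt = -x + w*x + u.

    With w = 1 the feedback exactly cancels the leak, so x integrates u;
    with w < 1 the stored value leaks away after the input ends."""
    x = 0.0
    xs = np.zeros(len(u))
    for t, ut in enumerate(u):
        x = x + dt * (-x + w * x + ut)
        xs[t] = x
    return xs

u = np.zeros(2000)
u[100:200] = 1.0               # a brief input pulse
perfect = run(w=1.0, u=u)      # holds the integrated value indefinitely
leaky = run(w=0.8, u=u)        # the value decays once the pulse ends
```

This also shows why hand design is fragile: the memory depends on fine-tuning w to exactly 1, whereas an optimized RNN is free to find distributed solutions that need not take this form.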
Reverse engineering an RNN after optimization
Revealing the dynamical mechanisms employed by an RNN to solve a particular task involves a final step after optimization: one must reverse engineer the solution found by the RNN. Because the solution was not constructed by hand, without this step one simply has another unintelligible network that solves the task of interest. Recently, Omri Barak and I demonstrated that RNNs can be understood by employing techniques from nonlinear dynamical systems theory [42••]. We reverse engineered a
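A core step of this reverse-engineering approach [42••] is to locate fixed points (and slow points) of the trained network by numerically minimizing a scalar speed function q(x) = ½‖F(x)‖², where F(x) is the right-hand side of the network equation. Below is a minimal sketch using plain gradient descent on an illustrative random network; in practice the optimization is started from states the network actually visits during the task:

```python
import numpy as np

def find_fixed_point(J, x, lr=0.1, steps=2000):
    """Gradient descent on the speed q(x) = 0.5 * ||F(x)||^2 for the
    autonomous dynamics F(x) = -x + J @ tanh(x). Minima with q ~ 0 are
    (approximate) fixed points of the network."""
    I = np.eye(len(x))
    for _ in range(steps):
        r = np.tanh(x)
        F = -x + J @ r
        JF = -I + J * (1.0 - r ** 2)   # Jacobian: dF_i/dx_j = -delta_ij + J_ij (1 - r_j^2)
        x = x - lr * (JF.T @ F)        # grad q = JF^T F
    q = 0.5 * np.sum((-x + J @ np.tanh(x)) ** 2)
    return x, q

rng = np.random.default_rng(1)
N = 20
J = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)  # weak coupling: x = 0 is the only fixed point
x_star, q_star = find_fixed_point(J, x=rng.standard_normal(N))
```

Once a fixed point is found, linearizing the dynamics around it (the Jacobian computed above) reveals the local computation, for example which directions are stable memories and which are unstable decision axes.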
A 3-bit memory
How memories can be represented in biological neural networks has long been studied in neuroscience. In this toy example, we trained an RNN to generate the dynamics necessary to implement a 3-bit memory (Figure 2a). Three inputs enter the RNN and specify the states of the three bits individually. For example, a +1 input pulse enters the RNN through the first input line. The first output should then transition to +1 if it was at −1, or stay at +1 if it already held that value.
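The task itself is easy to specify in code. The sketch below is a hypothetical helper that generates the target outputs an RNN would be trained against, not the trained network itself:

```python
import numpy as np

def flip_flop_targets(inputs):
    """Target outputs for the 3-bit memory task: each output channel holds
    the sign (+1 or -1) of the most recent pulse on its own input line."""
    T, n_bits = inputs.shape
    targets = np.zeros((T, n_bits))
    state = -np.ones(n_bits)          # assume every bit starts at -1
    for t in range(T):
        for b in range(n_bits):
            if inputs[t, b] != 0:     # a pulse overwrites the stored bit
                state[b] = np.sign(inputs[t, b])
        targets[t] = state
    return targets

u = np.zeros((100, 3))
u[10, 0] = +1.0   # set bit 0 to +1
u[40, 0] = +1.0   # redundant pulse: bit 0 stays at +1
u[60, 0] = -1.0   # flip bit 0 back to -1
y = flip_flop_targets(u)
```

Eight combinations of the three bits must be held stably between pulses, which is why the reverse-engineered solution involves eight attracting fixed points.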
Context-dependent decision making in prefrontal cortex
Animals are not limited to simple stimulus–response reflexes. They can rapidly and flexibly adapt to context: as the context changes, the same stimuli can elicit dramatically different behaviors [44]. To study this type of context-dependent decision making [8••], monkeys were trained to flexibly select and accumulate evidence from noisy visual stimuli in order to make a discrimination [45, 46, 47, 48, 49]. On the basis of a contextual cue, the monkeys either differentiated the
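The structure of such a trial can be sketched as follows: two noisy evidence streams are generated, and only the contextually relevant one determines the correct choice. The function name, coherence values, and noise level below are illustrative, not the actual experimental parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_trial(context, coh_motion, coh_color, T=200, noise=0.5):
    """One trial of the task: two noisy evidence streams plus a context label.

    The correct choice is the sign of the contextually relevant stream,
    integrated over the whole trial; the irrelevant stream must be ignored."""
    motion = coh_motion + noise * rng.standard_normal(T)
    color = coh_color + noise * rng.standard_normal(T)
    relevant = motion if context == "motion" else color
    choice = np.sign(relevant.sum())
    evidence = np.stack([motion, color], axis=1)  # shape (T, 2)
    return evidence, choice

# Identical coherences, but the correct answer flips with the context.
ev_m, choice_m = make_trial("motion", coh_motion=+0.3, coh_color=-0.3)
ev_c, choice_c = make_trial("color", coh_motion=+0.3, coh_color=-0.3)
```

The computational puzzle is exactly this flip: the same sensory evidence must be integrated in one context and discarded in the other, and the question is what circuit dynamics accomplish the selection.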
Conclusions
The study of neural dynamics at the circuit and systems level is an area of extremely active research. RNNs are a near-ideal modeling framework for studying neural circuit dynamics because they share fundamental features with biological tissue, for example, feedback, nonlinearity, and parallel and distributed computing. Many challenges remain, however. Currently, RNNs do not take anatomy into account, other than the existence of feedback. Anatomical features such as columnar structure,
References and recommended reading
Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
•• of outstanding interest
Acknowledgements
I am grateful to Larry Abbott, Mark Churchland, Valerio Mante and Krishna Shenoy for helpful discussions and feedback. This work was supported in part by DARPA Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR) award (N66001-10-C-2010) and an NIH Director's Pioneer Award (1DP1OD006409).
References
- Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science (2005)
- Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J (1972)
- Chaos in random neural networks. Phys Rev Lett (1988)
- Neural dynamics and circuit mechanisms of decision-making. Curr Opin Neurobiol (2012)
- Backpropagation through time: what it does and how to do it. Proc IEEE (1990)
- Long short-term memory. Neural Comput (1997)
- Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput (2002)
- Echo state property linked to an input: exploring a fundamental characteristic of recurrent neural networks. Neural Comput (2013)
- Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature (2012)
- Neural population dynamics during reaching. Nature (2012)
- Rapid sequences of population activity patterns dynamically encode task-critical spatial information in parietal cortex. J Neurosci
- Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature
- Dynamics of cortical neuronal ensembles transit from decision making to storage for later report. J Neurosci
- Heterogeneous population coding of a short-term memory and decision task. J Neurosci
- Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. J Neurophysiol
- Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature
- The importance of mixed selectivity in complex cognitive tasks. Nature
- Prefrontal cortex activity during flexible categorization. J Neurosci
- From fixed points to chaos: three models of delayed discrimination. Prog Neurobiol
- Dynamical principles in neuroscience. Rev Mod Phys
- Cortical control of arm movements: a dynamical systems perspective. Annu Rev Neurosci
- Encoding and decoding of overlapping odor sequences. Neuron
- Intensity versus identity coding in an olfactory system. Neuron
- Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles. Proc Natl Acad Sci U S A
- Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A
- A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature
- Cortical mechanisms controlling limb movement. Curr Opin Neurobiol
- Extracting dynamical structure embedded in neural activity. Neural Inform Process Syst (NIPS)
- Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw
- Stimulus-dependent suppression of chaos in recurrent neural networks. Phys Rev E