
Current Opinion in Neurobiology

Volume 25, April 2014, Pages 156-163

Neural circuits as computational dynamical systems

https://doi.org/10.1016/j.conb.2014.01.008

Highlights

  • Many cortical circuits can be viewed as computational dynamical systems.

  • A new tool to help us understand cortical dynamics is the optimized recurrent neural network (RNN).

  • RNNs are useful because they have fundamental similarities to biological neural systems.

  • RNNs are optimized to perform tasks analogous to those given to subjects in experimental settings.

  • RNNs can be used to generate novel ideas and hypotheses about the mechanisms of computation in biological neural circuits.

Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to addressing this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.

Introduction

Systems neuroscience is heading towards the simultaneous recording or imaging of many neurons, often while an animal engages in a complicated behavior [1, 2••, 3, 4••, 5, 63]. This trend has led to a wealth of data that requires new conceptual approaches, methods of analysis, and modeling techniques. For example, recorded neurons often display perplexing activity patterns [6, 7, 8••, 9••, 10] that are not easily explained in terms of tuning for sensory parameters or the variety of possible internal parameters that seem natural to an experimentalist. In particular, many studies of higher cortical areas have shown bewildering temporal dynamics at the single-cell and population levels [1, 2••, 3, 4••, 5, 6, 7, 8••] (Ames et al., unpublished data), often while the animal is exposed to constant or simple stimuli. As a result of these observations, many in the field are arriving at the conclusion that it is no longer enough to correlate the firing rates of individual neurons with experimental parameters. Instead, we should delve deeper and attempt to understand the dynamical mechanisms underlying the computations, or at least provide constraints on the possible mechanisms [6, 7, 8••, 9••, 10, 11]. We require examples of the classes of dynamics that do (or do not) allow networks of neurons to perform useful computations.

Experimental work has begun applying dynamical systems approaches [12, 13], originally applied in neuroscience to single neurons, to the population responses of simultaneously and individually recorded neurons [1, 2••, 4••, 8••, 14, 15, 16, 17]. The dynamical systems approach explicitly describes neural population responses as time-varying trajectories in a high-dimensional state space and views the dynamics as acting to shape these trajectories. In this framework, individual neurons work in concert to carry out computations. Much as the population vector requires the entire neural population to read out a signal, the dynamical systems approach implies that one must understand the population in order to understand the dynamics of a single neuron.
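To make the state-space picture concrete, here is a minimal sketch of the standard first step of such analyses: projecting trial-averaged population activity onto its leading principal components to obtain a low-dimensional trajectory. The data here are a random placeholder and the variable names are my own; this is not code from any of the cited studies.

```python
import numpy as np

# Placeholder for trial-averaged firing rates: n_neurons x n_timebins.
rng = np.random.default_rng(0)
rates = rng.standard_normal((100, 200))

# Center each neuron's rate across time, then take the leading
# principal components of the population activity.
centered = rates - rates.mean(axis=1, keepdims=True)
u, s, _ = np.linalg.svd(centered, full_matrices=False)
pcs = u[:, :3]                     # top three spatial modes

# The population trajectory: activity at each time bin projected
# into the low-dimensional state space spanned by those modes.
trajectory = pcs.T @ centered      # 3 x n_timebins
```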

One model class that can accommodate high-dimensional, distributed and dynamical data is the optimized recurrent neural network (RNN) (Figure 1a). RNNs are a natural model class to study mechanisms in systems neuroscience because they are both dynamical and computational. The purpose of this article is to better introduce RNNs to the field of systems neuroscience. I review recent technical progress in understanding and optimizing RNNs, focusing on a recent result from prefrontal cortex.


Recurrent neural networks

Wilson and Cowan originally developed the recurrent network in a biological context to describe the average firing rates of groups of cells [18]. A more modern and general definition is given by

$$\tau \dot{\mathbf{x}}(t) = -\mathbf{x}(t) + J\mathbf{r}(t) + B\mathbf{u}(t) + \mathbf{b},$$

where the ith component, $x_i$, of the vector $\mathbf{x}$ can be viewed as the summed and filtered synaptic currents at the soma of a biological neuron. The continuous variable $r_i$ is the instantaneous “firing rate”: a saturating nonlinear function of $x_i$, such as $\tanh(x_i)$ (Figure 1b). Thus, the RNN is a nonlinear dynamical system in which computation is distributed across many simple, interacting units.
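To make the definition concrete, here is a minimal sketch of simulating this equation by Euler integration, taking tanh as the saturating nonlinearity. The network size, time constants, and random weights are illustrative assumptions, not parameters from the studies reviewed here.

```python
import numpy as np

def simulate_rnn(J, B, b, u, tau=0.01, dt=0.001, x0=None):
    """Euler-integrate tau * dx/dt = -x + J r + B u + b, with r = tanh(x).

    J: (N, N) recurrent weights; B: (N, I) input weights;
    b: (N,) bias; u: (T, I) input time series.
    Returns the firing rates r(t) as a (T, N) array.
    """
    N = J.shape[0]
    x = np.zeros(N) if x0 is None else np.array(x0, dtype=float)
    rates = np.empty((u.shape[0], N))
    for t, u_t in enumerate(u):
        x += (dt / tau) * (-x + J @ np.tanh(x) + B @ u_t + b)
        rates[t] = np.tanh(x)
    return rates

# Example: a randomly connected network with no external drive.
rng = np.random.default_rng(1)
N, I, T = 200, 3, 1000
J = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)  # gain > 1: rich dynamics
r = simulate_rnn(J, rng.standard_normal((N, I)), np.zeros(N),
                 np.zeros((T, I)), x0=rng.standard_normal(N))
```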

Optimizing RNNs

In many modeling studies, a network model is designed by hand to reproduce, and thus explain, a set of experimental findings (e.g. see [22]). Here, I will instead focus on modeling using RNNs that have been optimized, or “trained”. For example, assume one wanted to study integration. In the designed approach, a network would be explicitly constructed such that it integrates an input. Specifically, the weights would be adjusted such that positive feedback internal to the network had an effective gain of one, so that an input is accumulated and then held after it is removed.
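For intuition about the hand-designed route, here is a minimal sketch of a textbook integrator construction (not code from the studies reviewed): a linear network whose recurrent feedback has a gain of exactly one along a chosen mode, so that the leak is cancelled and input along that mode accumulates.

```python
import numpy as np

N, T, dt, tau = 50, 500, 0.001, 0.01
rng = np.random.default_rng(2)

# Choose a mode v and build rank-1 feedback with eigenvalue 1 along v.
# Along v the leak -x is exactly cancelled, so input to v accumulates;
# all orthogonal modes simply decay.
v = rng.standard_normal(N)
v /= np.linalg.norm(v)
J = np.outer(v, v)

x = np.zeros(N)
u = np.zeros(T)
u[50:150] = 1.0                       # a pulse to be integrated

readout = np.empty(T)
for t in range(T):
    x += (dt / tau) * (-x + J @ x + v * u[t])
    readout[t] = v @ x                # ramps during the pulse, then holds
```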

Reverse engineering an RNN after optimization

Revealing the dynamical mechanisms employed by an RNN to solve a particular task involves a final step after optimization: one must reverse engineer the solution found by the RNN. Because the solution was not constructed by hand, without this step one simply has another unintelligible network that solves the task of interest. Recently, Omri Barak and I demonstrated that RNNs could be understood by employing techniques from nonlinear dynamical systems theory [42••]. We reverse engineered a set of trained networks by locating the fixed points and slow points of their dynamics and linearizing the dynamics around those points.
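A minimal sketch of the core numerical step in that approach, assuming the rate equation given above: minimize the scalar function q(x) = ½‖ẋ‖² from many initial conditions to find fixed points (q ≈ 0) and slow points (small local minima of q), then linearize the dynamics around each point. The optimizer choice and function names here are my own illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def find_fixed_point(J, b, c, x_init):
    """Minimize q(x) = 0.5 * ||F(x)||^2 with F(x) = -x + J tanh(x) + c + b,
    where c stands in for a constant input term B @ u held fixed."""
    def F(x):
        return -x + J @ np.tanh(x) + c + b

    def q(x):
        f = F(x)
        return 0.5 * f @ f

    sol = minimize(q, x_init, method="L-BFGS-B")
    x_star = sol.x

    # Linearize around the candidate point: the Jacobian of F governs the
    # local dynamics, and its eigenvalues reveal stability and time scales.
    D = np.diag(1.0 - np.tanh(x_star) ** 2)     # derivative of tanh
    jacobian = -np.eye(len(x_star)) + J @ D
    return x_star, sol.fun, np.linalg.eigvals(jacobian)
```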

A 3-bit memory

How memories are represented in biological neural networks has long been a central question in neuroscience. In this toy example, we trained an RNN to generate the dynamics necessary to implement a 3-bit memory (Figure 2a). Three inputs enter the RNN and specify the states of the three bits individually. For example, a +1 input pulse enters the RNN through the first input line. The first output should then transition to +1 if it was at −1, or stay at +1 if it already held that value.
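A minimal sketch of how such a task can be posed as input and target time series for training. The trial length, pulse width, and pulse statistics are illustrative assumptions.

```python
import numpy as np

def make_3bit_trial(T=500, n_pulses=6, pulse_len=10, rng=None):
    """Three input channels deliver random +/-1 pulses; each target output
    must hold the sign of the most recent pulse on its input line."""
    rng = rng or np.random.default_rng()
    inputs = np.zeros((T, 3))
    targets = -np.ones((T, 3))              # all three bits start at -1
    t_pulse = np.sort(rng.choice(np.arange(1, T - pulse_len),
                                 size=n_pulses, replace=False))
    for t0 in t_pulse:
        line = rng.integers(3)              # which bit receives the pulse
        sign = rng.choice([-1.0, 1.0])      # new value for that bit
        inputs[t0:t0 + pulse_len, line] = sign
        targets[t0:, line] = sign           # the bit flips, or stays put
    return inputs, targets
```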

Context-dependent decision making in prefrontal cortex

Animals are not limited to simple stimulus-response reflexes. They can rapidly and flexibly accommodate to context: as the context changes, the same stimuli can elicit dramatically different behaviors [44]. To study this type of context-dependent decision making [8••], monkeys were trained to flexibly select and accumulate evidence from noisy visual stimuli in order to make a discrimination [45, 46, 47, 48, 49]. On the basis of a contextual cue, the monkeys either discriminated the direction of motion or the color of a random-dot stimulus, ignoring the feature that was irrelevant in the current context.
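A minimal sketch of how a single trial of this kind of task can be encoded as network input, following the general structure of the task in [8••]; the channel layout, noise level, and signal ranges are my own illustrative assumptions.

```python
import numpy as np

def make_context_trial(context, motion_coh, color_coh, T=300,
                       noise=0.3, rng=None):
    """Four input channels: noisy motion evidence, noisy color evidence,
    and two binary cues signaling which feature is currently relevant.
    The correct choice is the sign of the cued evidence stream."""
    rng = rng or np.random.default_rng()
    u = np.zeros((T, 4))
    u[:, 0] = motion_coh + noise * rng.standard_normal(T)   # motion stream
    u[:, 1] = color_coh + noise * rng.standard_normal(T)    # color stream
    u[:, 2] = 1.0 if context == "motion" else 0.0           # motion cue
    u[:, 3] = 1.0 if context == "color" else 0.0            # color cue
    relevant = motion_coh if context == "motion" else color_coh
    return u, np.sign(relevant)
```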

Conclusions

The study of neural dynamics at the circuit and systems level is an area of extremely active research. RNNs are a near-ideal modeling framework for studying neural circuit dynamics because they share fundamental features with biological tissue, for example, feedback, nonlinearity, and parallel and distributed computing. Many challenges remain, however. Currently, RNNs do not take anatomy into account, other than the existence of feedback. Anatomical features such as columnar structure, distinct cell types, and laminar organization have yet to be incorporated into these models.

References and recommended reading

Papers of particular interest, published within the period of review, have been highlighted as:

  • • of special interest

  • •• of outstanding interest

Acknowledgements

I am grateful to Larry Abbott, Mark Churchland, Valerio Mante and Krishna Shenoy for helpful discussions and feedback. This work was supported in part by DARPA Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR) award (N66001-10-C-2010) and an NIH Director's Pioneer Award (1DP1OD006409).

References (63)

  • D.A. Crowe et al.

    Rapid sequences of population activity patterns dynamically encode task-critical spatial information in parietal cortex

    J Neurosci

    (2010)
  • C.D. Harvey et al.

    Choice-specific sequences in parietal cortex during a virtual-navigation decision task

    Nature

    (2012)
  • A. Ponce-Alvarez et al.

    Dynamics of cortical neuronal ensembles transit from decision making to storage for later report

    J Neurosci

    (2012)
  • J.K. Jun et al.

    Heterogeneous population coding of a short-term memory and decision task

    J Neurosci

    (2010)
  • M.M. Churchland et al.

    Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex

    J Neurophysiol

    (2007)
  • V. Mante et al.

    Context-dependent computation by recurrent dynamics in prefrontal cortex

    Nature

    (2013)
  • M. Rigotti et al.

    The importance of mixed selectivity in complex cognitive tasks

    Nature

    (2013)
  • J.E. Roy et al.

    Prefrontal cortex activity during flexible categorization

    J Neurosci

    (2010)
  • O. Barak et al.

    From fixed points to chaos: three models of delayed discrimination

    Prog Neurobiol

    (2013)
  • M. Rabinovich et al.

    Dynamical principles in neuroscience

    Rev Mod Phys

    (2006)
  • K.V. Shenoy et al.

    Cortical control of arm movements: a dynamical systems perspective

    Annu Rev Neurosci

    (2013)
  • B.M. Broome et al.

    Encoding and decoding of overlapping odor sequences

    Neuron

    (2006)
  • M. Stopfer et al.

    Intensity versus identity coding in an olfactory system

    Neuron

    (2003)
  • L.M. Jones et al.

    Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles

    Proc Natl Acad Sci U S A

    (2007)
  • K. Doya
  • J.J. Hopfield

    Neural networks and physical systems with emergent collective computational abilities

    Proc Natl Acad Sci U S A

    (1982)
  • D. Zipser et al.

    A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons

    Nature

    (1988)
  • E.E. Fetz

    Cortical mechanisms controlling limb movement

    Curr Opin Neurobiol

    (1993)
  • B.M. Yu et al.

    Extracting dynamical structure embedded in neural activity

    Neural Inform Process Syst (NIPS)

    (2006)
  • Y. Bengio et al.

    Learning long-term dependencies with gradient descent is difficult

    IEEE Trans Neural Netw

    (1994)
  • K. Rajan et al.

    Stimulus-dependent suppression of chaos in recurrent neural networks

    Phys Rev E

    (2010)