
Spiking Recurrent Neural Networks Represent Task-Relevant Neural Sequences in Rule-Dependent Computation

Published in: Cognitive Computation

Abstract

Prefrontal cortical neurons play essential roles in performing rule-dependent tasks and working memory-based decision making. Motivated by PFC recordings from task-performing mice, we developed an excitatory–inhibitory spiking recurrent neural network (SRNN) to perform a rule-dependent two-alternative forced choice (2AFC) task. We imposed several important biological constraints on the SRNN and incorporated spike frequency adaptation (SFA) together with the SuperSpike surrogate-gradient method to train the SRNN efficiently. The trained SRNN produced emergent rule-specific tunings in single-unit representations, showing rule-dependent population dynamics that resembled experimentally observed data. Under various test conditions, we manipulated the SRNN's parameters or configuration in computer simulations and investigated the impact of rule-coding error, delay duration, recurrent weight connectivity and sparsity, and excitation/inhibition (E/I) balance on both task performance and neural representations. Overall, our modeling study offers a computational framework for understanding neuronal representations at a fine timescale during working memory and cognitive control, and generates new experimentally testable hypotheses for future experiments.
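The SuperSpike approach mentioned above sidesteps the non-differentiable spike threshold by substituting a smooth surrogate derivative during backpropagation. The following is a minimal sketch of that idea for a single leaky integrate-and-fire (LIF) step; all parameter values (`tau_mem`, `v_th`, `beta`) are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def lif_step(v, i_in, tau_mem=20.0, dt=1.0, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire membrane.
    Returns the updated (reset) potential and the binary spike output."""
    v = v + (dt / tau_mem) * (-v + i_in)      # leaky integration
    spikes = (v >= v_th).astype(float)        # non-differentiable Heaviside
    v = v * (1.0 - spikes)                    # reset to 0 after a spike
    return v, spikes

def superspike_surrogate(v, v_th=1.0, beta=10.0):
    """Fast-sigmoid surrogate derivative used in place of the Heaviside's
    zero-almost-everywhere gradient: h(u) = 1 / (beta * |u| + 1)^2."""
    u = v - v_th
    return 1.0 / (beta * np.abs(u) + 1.0) ** 2

# Strong input drives a spike; weak input does not.
v = np.zeros(3)
v, s = lif_step(v, i_in=np.array([0.0, 10.0, 40.0]))
```

In training, the forward pass uses the hard threshold while the backward pass swaps in `superspike_surrogate`, which peaks at the threshold and decays smoothly away from it, so gradient information flows even through subthreshold units.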


Data and Software Availability

All custom computer software files are available at https://github.com/Jakexxh/srnn-2afc.


Acknowledgements

We thank John Rinzel for valuable feedback on this work, and the current and former members of the Chen laboratory for helpful discussions. We thank Jonathan Gould and Robert MacKay for English proofreading. This work was partially supported by NSF-CBET grant 1835000 (ZSC) and NIH grant R01-MH118928 (ZSC). Cloud computing resources were partially supported by a Research Award from Oracle for Research. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information


Contributions

ZSC and XX conceived and designed the experiments. ZSC supervised the project. RDW and MMH provided and collected experimental data. XX performed the computer experiments. XX and ZSC analyzed the data, contributed to the software and wrote the paper.

Corresponding author

Correspondence to Zhe Sage Chen.

Ethics declarations

Declaration of Interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Xue, X., Wimmer, R.D., Halassa, M.M. et al. Spiking Recurrent Neural Networks Represent Task-Relevant Neural Sequences in Rule-Dependent Computation. Cogn Comput 15, 1167–1189 (2023). https://doi.org/10.1007/s12559-022-09994-2
