

Dynamical flexible inference of nonlinear latent factors and structures in neural population activity

Abstract

Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named ‘DFINE’ (for ‘dynamical flexible inference for nonlinear embeddings’), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
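The core idea of the abstract — tractable linear dynamics on a nonlinear manifold, with inference that tolerates missing samples — can be illustrated with a minimal sketch. Assuming the manifold factors a_t have already been extracted by the trained autoencoder (not shown here), a standard Kalman filter on the dynamic factors x_t can simply skip the measurement update at missing timesteps. All matrices, dimensions and the NaN convention below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def kalman_filter_missing(a_seq, A, C, Q, R, x0, P0):
    """Causal (filtering) inference of dynamic latent factors x_t from
    manifold latent factors a_t under a linear state-space model:
        x_t = A x_{t-1} + w_t,   w_t ~ N(0, Q)
        a_t = C x_t + r_t,       r_t ~ N(0, R)
    Rows of a_seq containing NaN mark missing observations; for those
    timesteps the measurement update is skipped and the filter coasts
    on the prediction step alone.
    """
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    I = np.eye(len(x))
    filtered = []
    for a in a_seq:
        # prediction step
        x = A @ x
        P = A @ P @ A.T + Q
        # measurement update, only when the observation is present
        if not np.any(np.isnan(a)):
            S = C @ P @ C.T + R             # innovation covariance
            K = P @ C.T @ np.linalg.inv(S)  # Kalman gain
            x = x + K @ (a - C @ x)
            P = (I - K @ C) @ P
        filtered.append(x)
    return np.stack(filtered)
```

Non-causal (smoothing) inference would add a backward Rauch–Tung–Striebel pass over these filtered estimates; the autoencoder mapping between the manifold factors a_t and the neural observations y_t is omitted in this sketch.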


Fig. 1: DFINE graphical model and its flexible inference method.
Fig. 2: Experimental tasks.
Fig. 3: DFINE can learn the dynamics on nonlinear manifolds and enables flexible inference in the presence of missing observations in simulated datasets.
Fig. 4: In the saccade dataset, DFINE outperformed benchmark methods in behaviour and neural prediction accuracy, and more robustly extracted the ring-like manifold in single trials.
Fig. 5: In the motor datasets, DFINE outperformed benchmark methods in behaviour and neural prediction accuracy and in robustly extracting the predictive ring-like manifold.
Fig. 6: Supervised DFINE extracts latent factors that are more behaviour predictive.
Fig. 7: DFINE can perform both causal and non-causal inference with missing observations and do so more accurately through non-causal inference.


Data availability

The main data supporting the results in this study are available within the paper and its Supplementary Information. Two of the datasets used for this work are publicly available from the Miller and Sabes labs at the following links: dataset 3 at https://doi.org/10.6080/K0FT8J72 and dataset 4 at https://doi.org/10.5281/zenodo.3854034. The other two datasets used to support the results are too large to be publicly shared, yet they are available for research purposes from the corresponding author on reasonable request.

Code availability

The custom computer code of DFINE is available at https://github.com/ShanechiLab/torchDFINE.


Acknowledgements

We acknowledge the support of the NIH Director’s New Innovator Award DP2-MH126378 (to M.M.S.), NIH R01MH123770 (to M.M.S.) and NSF CRCNS Award IIS 2113271 (to M.M.S. and B.P.).

Author information


Contributions

H.A. and M.M.S. conceived the study and developed the new algorithms. H.A. performed all the analyses except for the grid task. E.E. performed the analyses for the grid task and the Lorenz system simulations, and contributed to code implementation and fLDS analyses. H.A. and M.M.S. wrote the paper. B.P. designed and performed the experiments for two of the non-human primate datasets. M.M.S. supervised the work.

Corresponding author

Correspondence to Maryam M. Shanechi.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Biomedical Engineering thanks the anonymous reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Characterization of DFINE’s performance vs. the observation noise and the amount of training data.

a, Examples of noisy observations of the latent factor trajectory are shown in three different noise regimes. Figure convention is similar to Fig. 3b. b, The underlying trajectory reconstruction error is shown for the Swiss-roll manifold at various noise-to-signal ratios (NSR). The noise-to-signal ratio is calculated by dividing the standard deviation of the observation noise by that of the original signal. Each dot represents the mean reconstruction error across simulated sessions and cross-validation folds (n=50). The solid line represents the mean and the shaded area shows the 95% confidence bound of the mean. c, Similar to b for the ring-like manifold. d, Similar to b for the torus manifold. e, The cross-validated reconstruction error of the underlying trajectory is shown for the Swiss-roll manifold at a given noise-to-signal ratio (0.5, corresponding to the rightmost panel in a) versus the number of training trials. DFINE's performance converges when around 75 trials are available. Similar to b-d, each dot represents the mean reconstruction error across simulated sessions and cross-validation folds (n=50), the solid line represents the mean, and the shaded area shows the 95% confidence bound of the mean.
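The noise-to-signal ratio defined in b can be computed directly from the clean and noisy signals. This is a minimal sketch of that calculation, not the authors' code; the function name and the pooling over all samples and dimensions are assumptions.

```python
import numpy as np

def noise_to_signal_ratio(signal, noisy):
    """Standard deviation of the additive observation noise divided by
    the standard deviation of the clean signal, pooled over all samples
    (and dimensions, if the arrays are multidimensional)."""
    noise = np.asarray(noisy) - np.asarray(signal)
    return np.std(noise) / np.std(signal)
```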

Extended Data Fig. 2 DFINE outperforms LDM and SAE in the 3D naturalistic reach-and-grasp task also when using the LFP modality instead of smoothed firing rates.

Neural and behaviour prediction accuracies of DFINE versus the benchmark methods are shown in a and b, respectively. Figure convention is similar to Fig. 5. In both neural and behaviour prediction accuracies, DFINE was again better than SAE (P < 2.7 × 10−5, Ns=35, one-sided Wilcoxon signed-rank test) while SAE was better than LDM (P < 2.5 × 10−7, Ns=35, one-sided Wilcoxon signed-rank test).

Extended Data Fig. 3 DFINE outperforms fLDS across the four experimental datasets.

Neural and behaviour prediction accuracies were both significantly higher in DFINE compared with fLDS in the saccade task (a), the 3D naturalistic reach-and-grasp task (b), the 2D random-target reaching task (c), and the 2D grid reaching task (d). Figure convention is similar to Figs. 4 and 5. Beyond these performance gains, and as a major goal and benefit, DFINE provides a new capability for flexible inference from neural population activity, unlike fLDS, which performs only non-causal inference and does not directly address missing data.

Extended Data Fig. 4 DFINE outperforms SAE in the presence of stochasticity in the nonlinear temporal dynamics of the Lorenz attractor system, while SAE outperforms DFINE when the dynamics are almost deterministic.

a, Examples of latent factor trajectories for the stochastic Lorenz attractor system across various dynamics noise magnitudes (quantified as the noise variance; Methods). The arrows associate example trajectories with the comparisons in b for visualization. b, The cross-validated reconstruction accuracies of the ground-truth Lorenz latent factors for SAE and DFINE are shown for various dynamics noise magnitudes. The solid lines are the mean latent factor reconstruction accuracy across simulated systems and cross-validation folds (n=100). The shaded area around the mean represents the 95% confidence bound. Figure convention for the significance asterisks is similar to Fig. 4. For nonlinear temporal dynamics that are almost deterministic, SAE outperformed DFINE (dynamics noise magnitudes smaller than ~2.5 × 10−3). As the dynamics noise magnitude increased, SAE performance degraded while DFINE performance stayed robust. DFINE significantly outperformed SAE when there was stochasticity in the nonlinear temporal dynamics, specifically when the dynamics noise magnitude was larger than ~10−2. DFINE's inference explicitly takes stochasticity into account by incorporating the stochastic noise variables during inference; this helps DFINE perform well and robustly in the presence of stochasticity in the Lorenz dynamics here (Methods). The x axis is in log scale and shows the variance of the dynamics noise (Methods).
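A stochastic Lorenz system of the kind described above can be sketched with an Euler-Maruyama discretization and additive Gaussian dynamics noise of a given variance. The step size, initial condition and parameters below are illustrative assumptions; the paper's exact simulation setup is in Methods:

```python
import numpy as np

def simulate_stochastic_lorenz(n_steps, noise_var, dt=0.01, seed=0,
                               sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-Maruyama simulation of a Lorenz attractor with additive
    Gaussian dynamics noise of variance `noise_var` (an illustrative
    stand-in for the stochastic Lorenz setup, not the paper's code)."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps, 3))
    x[0] = [1.0, 1.0, 1.0]  # arbitrary initial condition
    for t in range(1, n_steps):
        px, py, pz = x[t - 1]
        drift = np.array([sigma * (py - px),
                          px * (rho - pz) - py,
                          px * py - beta * pz])
        # Dynamics noise enters with the standard sqrt(dt) scaling.
        noise = rng.normal(0.0, np.sqrt(noise_var * dt), size=3)
        x[t] = x[t - 1] + dt * drift + noise
    return x

# A trajectory in the stochastic regime (noise variance 1e-2).
traj = simulate_stochastic_lorenz(n_steps=2000, noise_var=1e-2)
```

Sweeping `noise_var` over the values on the x axis of panel b would reproduce the deterministic-to-stochastic regimes compared in this figure.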

Extended Data Fig. 5 TDA analysis directly on the observed neural population activity reveals a ring-like manifold structure.

The ring-like manifold was found in the saccade task (a), the 3D naturalistic reach-and-grasp task (b), the 2D random-target reaching task (c), and the 2D grid reaching task (d). We performed TDA directly on the observed neural population activity y_t ∈ ℝ^(n_y × 1) without any modelling and quantified the ratio between the length (duration between birth and death) of the most persistent one-dimensional hole and that of the second most persistent one-dimensional hole. This ratio was significantly larger than that of the control data in all datasets (a-d), again indicating the existence of a ring-like manifold structure in the data (even without any modelling). The control data is taken as unit-norm Gaussian noise in ℝ^(n_y × 1) because we z-score each trial's latent factors for the TDA analysis to remove scaling differences (Methods). The line inside boxes shows the median, box edges represent the 25th and 75th percentiles, and whiskers show the minimum and maximum values after removing the outliers. We removed outliers that were outside the 3-standard-deviation range from the mean on each side. Figure convention for asterisks is similar to Fig. 4.
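Given an H1 persistence diagram (which in practice would come from a TDA library such as ripser, not shown here), the ratio statistic described above is a simple computation on hole lifetimes. The toy diagram below is an invented illustration of a dominant ring:

```python
import numpy as np

def persistence_ratio(h1_diagram):
    """Ratio of the lifetime (death minus birth) of the most persistent
    one-dimensional hole to that of the second most persistent one.
    `h1_diagram` is an (n, 2) array of (birth, death) pairs for H1."""
    lifetimes = np.sort(h1_diagram[:, 1] - h1_diagram[:, 0])[::-1]
    return lifetimes[0] / lifetimes[1]

# Toy H1 diagram: one long-lived loop (a ring-like structure) plus
# two short-lived holes that are likely noise.
diagram = np.array([[0.1, 2.1],    # lifetime 2.0 (dominant ring)
                    [0.3, 0.8],    # lifetime 0.5
                    [0.2, 0.4]])   # lifetime 0.2

print(persistence_ratio(diagram))  # 2.0 / 0.5 = 4.0
```

A ratio well above 1, compared against the same statistic on Gaussian control data, is what indicates a ring-like structure in the caption's analysis.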

Extended Data Fig. 6 DFINE outperforms LDM and SAE in the presence of missing observations and the improvement vs. SAE grows with more missing samples.

a, LDM, SAE and DFINE’s behaviour prediction accuracies across various observed datapoint ratios in the 3D naturalistic reach-and-grasp task. Figure convention is similar to that in Fig. 7. Given models trained on fully observed neural observations, these methods inferred the latent factors in the test set that had missing observations. LDM performed this inference with the Kalman filter/smoother. SAE did so either by imputing missing observations in the test set to zero as done previously53,54 (SAE zero imputation), or by imputing them to the average of the last and next/future seen observations (SAE average imputation). DFINE did so through its new flexible inference method. We then used the inferred factors in the test set to predict the behaviour variables. This process was done at 0.3 and 0.6 observed datapoint ratios. For all models, we show the behaviour prediction accuracy of the smoothed latent factors. DFINE’s behaviour prediction accuracy remains better than other models even in the lower observed datapoint ratios. b, The percentage drop in the behaviour prediction accuracy of the nonlinear models – SAE zero imputation, SAE average imputation and DFINE – as we vary the observed datapoint ratio from 1 to 0.3 and from 1 to 0.6. The percentage drop in behaviour prediction accuracy of DFINE is significantly lower than that of SAE with both imputation techniques, showing that DFINE can better compensate for missing observations. Figure convention for bars, dots and asterisks is similar to that in Fig. 4. Similar results held for the 2D random-target reaching task (c, d), and for the 2D grid reaching task (e, f).
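The two SAE imputation baselines described above can be sketched as follows. These are illustrative implementations of zero imputation and last/next-average imputation, assuming the first and last samples of each trial are observed; they are not the paper's code:

```python
import numpy as np

def zero_impute(y, observed_mask):
    """Set missing samples to zero ('SAE zero imputation').
    `y` is (T,) or (T, n_y); `observed_mask` is a boolean (T,) array."""
    out = y.copy()
    out[~observed_mask] = 0.0
    return out

def average_impute(y, observed_mask):
    """Set each missing sample to the average of the last and next
    observed samples ('SAE average imputation'). Assumes, for
    simplicity, that the first and last samples are observed."""
    out = y.copy()
    obs = np.flatnonzero(observed_mask)
    for t in np.flatnonzero(~observed_mask):
        prev_t = obs[obs < t].max()
        next_t = obs[obs > t].min()
        out[t] = 0.5 * (y[prev_t] + y[next_t])
    return out

# Illustrative 1D time series with samples 1 and 4 missing.
y = np.arange(6, dtype=float)
mask = np.array([True, False, True, True, False, True])
print(zero_impute(y, mask))     # [0. 0. 2. 3. 0. 5.]
print(average_impute(y, mask))  # [0. 1. 2. 3. 4. 5.]
```

DFINE itself needs neither baseline: its inference skips the update step at missing samples rather than filling them in.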

Extended Data Fig. 7 Example behaviour trajectories for the four experimental datasets.

a, Eye movement trajectories for the saccade task. Each colour represents one target, that is, condition. b, 3D hand movement trajectories for the 3D naturalistic reach-and-grasp task. Each colour represents one condition, that is, movement to the left or right. c,d, 2D cursor trajectories for the 2D random-target reaching task (c) and the 2D grid reaching task (d), shown shifted in space to start from the centre. Each condition is shown with a different colour and represents reaches that have similar direction angles. Regardless of start or end position, the angle of movement specifies the 8 conditions, which correspond to movement angle intervals of 0–45, 45–90, 90–135, 135–180, 180–225, 225–270, 270–315 and 315–360 degrees, respectively.
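The angle-based condition assignment for the reaching tasks can be sketched as below; `reach_condition` is a hypothetical helper name, and the binning convention (condition 0 covers 0-45 degrees, and so on) is one reasonable reading of the caption:

```python
import math

def reach_condition(start, end):
    """Assign a 2D reach to one of 8 conditions by its movement angle,
    using 45-degree bins: condition 0 covers 0-45 degrees, condition 1
    covers 45-90 degrees, ..., condition 7 covers 315-360 degrees."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360.0  # map to [0, 360)
    return int(angle // 45.0)

# Conditions depend only on movement angle, not on start/end position.
print(reach_condition((0, 0), (1, 0)))    # rightward, 0 degrees  -> 0
print(reach_condition((0, 0), (0, 1)))    # upward, 90 degrees    -> 2
print(reach_condition((0, 0), (-1, -1)))  # down-left, 225 degrees -> 5
```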

Extended Data Fig. 8 Neural prediction accuracy of DFINE when varying nx and na.

Neural prediction accuracy for the saccade task (a), 3D naturalistic reach-and-grasp task (b), 2D random-target reaching task (c), and 2D grid reaching task (d), for various pairwise values of (nx, na). This analysis shows that taking nx = na comes at no loss of generality, as neural prediction accuracy increased the most when nx and na were increased together (Methods). Since both the dynamic and the manifold latent factors can be the bottleneck for information, it intuitively makes sense to increase their dimensionalities together for maximum performance and the least computational complexity in the dimension search.

Supplementary information

Supplementary Information

Supplementary Figures, Tables, Notes and References.

Reporting Summary

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Abbaspourazad, H., Erturk, E., Pesaran, B. et al. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat. Biomed. Eng 8, 85–108 (2024). https://doi.org/10.1038/s41551-023-01106-1
