
Bayesian inference for latent stepping and ramping models of spike train data

from Part I - State space methods for neural data

Published online by Cambridge University Press:  05 October 2015

K. W. Latimer, The University of Texas at Austin
A. C. Huk, The University of Texas at Austin
J. W. Pillow, Princeton University
Zhe Chen, New York University

Summary

Background: dynamics of neural decision making

A fundamental challenge in neuroscience is to understand how decisions are computed in neural circuits. One popular approach to this problem is to record from single neurons in brain regions that lie between primary sensory and motor regions while an animal performs a perceptual decision-making task. Typical tasks require the animal to integrate noisy sensory evidence over time in order to make a binary decision about the stimulus. Such experiments have the tacit goal of characterizing the dynamics governing the transformation of sensory information into a representation of the decision. However, recorded spike trains do not reveal these dynamics directly; they represent noisy, incomplete emissions that reflect the underlying dynamics only indirectly.
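The observation model implied here, that spike counts are noisy, conditionally Poisson emissions from an unobserved firing rate, can be made concrete with a small simulation. This is an illustrative sketch with made-up numbers (a 1 s trial, 1 ms bins, a hypothetical ramping latent rate), not the chapter's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent firing rate (spikes/s) on a single trial: the
# "dynamics" an experimenter would like to recover.
dt = 0.001                       # 1 ms bins
t = np.arange(0.0, 1.0, dt)     # one 1 s trial
latent_rate = 10.0 + 30.0 * t    # an assumed ramp from 10 to 40 sp/s

# Observed spike train: conditionally Poisson counts given the latent rate.
spikes = rng.poisson(latent_rate * dt)

# The raw observation is sparse and noisy; the latent rate shows up only
# indirectly, through the expected value of the counts.
print(spikes.sum())              # total spike count on this trial
```

A single trial like this carries only a few dozen spikes, which is why the latent trajectory cannot simply be read off the raw data.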

This dissociation between observed spike trains and the unobserved dynamics governing neural population activity has posed a key challenge for using neural measurements to gain insight into how the brain computes decisions. Recordings of decision-related neural activity have certainly shed light on which parts of the brain are involved in decision making and what roles each area plays. But without direct access to the dynamics underlying single-trial decision formation, most analyses of decision-related neural data rely on estimating spike rates by averaging over trials (and sometimes over neurons as well). Although the central tendency is of course a reasonable starting point in data analysis, sole reliance on the mean can obscure single-trial dynamics when substantial stochastic components are present. For example, as discussed in depth in this chapter, averaging a set of step functions whose steps occur at different times on different trials yields a mean that ramps continuously, masking the presence of discrete dynamics.

Although the averaging- and regression-based analyses commonly used in the field are straightforward to conceptualize and easy to apply, they provide limited insight into the dynamics that may govern how individual decisions are made. State space methods, by contrast, are particularly well suited to analyzing the neural representation of decisions (and other cognitive variables): the latent state can account for unobserved, trial-varying dynamics, and the dynamics placed on that state can be linked directly to models of decision making.
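The averaging artifact described above is easy to reproduce. The sketch below uses invented parameters (pre- and post-step rates of 5 and 40 sp/s, step times drawn uniformly across trials) purely to illustrate the point: every trial's latent rate is a hard step, yet the trial average rises smoothly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation: stepping trials whose average looks like a ramp.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
n_trials = 500
low, high = 5.0, 40.0            # assumed pre- and post-step rates (sp/s)

# Each trial's latent rate is a step, but the step TIME varies by trial.
step_times = rng.uniform(0.2, 0.8, size=n_trials)
rates = np.where(t[None, :] < step_times[:, None], low, high)

# Trial-averaged rate: climbs smoothly from `low` to `high`, even though
# no single trial ever ramps.
mean_rate = rates.mean(axis=0)

print(mean_rate[150], mean_rate[500], mean_rate[850])
```

Before the earliest possible step the average sits at the low rate and after the latest possible step at the high rate, but in between it interpolates smoothly, exactly the continuous "ramp" that a discrete stepping process can masquerade as in the trial mean.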

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2015


