
Computational Neuroscience of Neuronal Networks


Abstract

A fundamental challenge in modeling the brain is how to deal with the enormous complexity that exists at multiple spatial and temporal scales. This complexity is already encountered at the level of the single neuron, but it becomes still more apparent, and daunting, when creating models at the level of neuronal networks, a term that generally describes models on the scale of tens to millions of connected neurons. Complexity occurs across scales: rapid interneuronal communication takes place on spatial scales of microns and temporal scales of milliseconds, while learning across cortex may span centimeters and unfold over hours, days, and years. No single model can include all complexity at all levels, so decisions must be made about what to leave out of a given model. Part of the simplification process involves embedding models across multiple scales (multiscale modeling). This chapter describes how the basic neuron model used in networks is greatly simplified relative to the complex single-cell models developed in the prior chapter. Multiple other simplifications are also made: using only a restricted number of cell types (excitatory and inhibitory; fast and slow) instead of the many types that exist; leaving out large parts of neuron function, for example the axon; and assuming that the network is driven externally rather than creating activity through its own mechanisms.
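
As a concrete illustration of these simplifications, the sketch below builds a small, externally driven network from the two-variable Izhikevich spiking-neuron model, using only two cell classes, excitatory and inhibitory. The cell counts, synaptic weights, and noise drive are illustrative assumptions rather than parameters of any particular published network.

```python
# Minimal sketch of a driven network of simplified spiking neurons
# (two-variable Izhikevich model) with only two cell classes:
# excitatory ("regular spiking") and inhibitory ("fast spiking").
# All sizes, weights, and drive amplitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
Ne, Ni = 800, 200                      # numbers of excitatory / inhibitory cells
N = Ne + Ni

# Per-cell parameters, with some heterogeneity within each class
re, ri = rng.random(Ne), rng.random(Ni)
a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])
b = np.concatenate([0.20 * np.ones(Ne), 0.25 - 0.05 * ri])
c = np.concatenate([-65 + 15 * re**2, -65 * np.ones(Ni)])
d = np.concatenate([8 - 6 * re**2, 2 * np.ones(Ni)])

# Random synaptic weights: positive from E cells, negative from I cells
S = np.hstack([0.5 * rng.random((N, Ne)), -rng.random((N, Ni))])

v = -65.0 * np.ones(N)                 # membrane potential (mV)
u = b * v                              # recovery variable
spikes = []                            # (time, cell) pairs, usable for a raster plot

for t in range(1000):                  # 1000 ms of simulated time, 1-ms steps
    # External noise drive: the network is driven rather than self-sustaining
    I = np.concatenate([5 * rng.standard_normal(Ne),
                        2 * rng.standard_normal(Ni)])
    fired = np.where(v >= 30)[0]       # cells that spiked on this step
    spikes.extend((t, i) for i in fired)
    v[fired] = c[fired]                # post-spike reset
    u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)       # synaptic input from cells that just fired
    for _ in range(2):                 # two 0.5-ms substeps for numerical stability
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)

print(f"{len(spikes)} spikes in 1 s across {N} cells")
```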

Once a sufficiently simplified network model is obtained, a variety of mathematical, data-mining, and visualization tools can be used to investigate its properties. Oscillations in a network can be investigated through frequency spectra. Synchrony of activity can be measured with various correlation measures. The structure of a network can be analyzed through graph theory, by considering properties such as path length, centrality, and clustering. Communication in a network can be analyzed through information theory, which quantifies the extent to which neurons influence, and transfer information to, other neurons.
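
The sketch below illustrates how three of these tools might be applied to simulation output: a frequency spectrum of the summed population activity, mean pairwise correlation as a simple synchrony measure, and graph-theoretic statistics (clustering, path length, centrality) of the connectivity matrix. The spike matrix and connectivity matrix used here are random placeholders standing in for real simulation output, and the information-theoretic measures mentioned above are omitted for brevity.

```python
# Minimal analysis sketch on placeholder data: `spk` is a binary spike matrix
# (cells x 1-ms bins) and `W` a directed connectivity matrix (cells x cells).
import numpy as np
import networkx as nx
from scipy import signal

rng = np.random.default_rng(1)
n_cells, n_ms = 100, 2000
spk = (rng.random((n_cells, n_ms)) < 0.02).astype(float)   # placeholder spike trains
W = rng.random((n_cells, n_cells)) < 0.1                    # placeholder connectivity
np.fill_diagonal(W, False)

# 1. Oscillations: power spectrum of the summed population activity
pop_rate = spk.sum(axis=0)                                  # spikes per 1-ms bin
freqs, power = signal.welch(pop_rate - pop_rate.mean(),
                            fs=1000.0, nperseg=512)         # fs in Hz (1-ms bins)
print("peak frequency (Hz):", freqs[np.argmax(power)])

# 2. Synchrony: mean pairwise correlation of spike counts in 10-ms bins
counts = spk.reshape(n_cells, -1, 10).sum(axis=2)
cc = np.corrcoef(counts)
print("mean pairwise correlation:",
      round(float(cc[np.triu_indices(n_cells, k=1)].mean()), 3))

# 3. Structure: graph-theoretic measures of the connectivity matrix
G = nx.from_numpy_array(W.astype(int), create_using=nx.DiGraph)
print("mean clustering coefficient:",
      round(nx.average_clustering(G.to_undirected()), 3))
print("characteristic path length:",
      round(nx.average_shortest_path_length(G), 3)
      if nx.is_strongly_connected(G) else "undefined (not strongly connected)")
dc = nx.degree_centrality(G)
print("most central cell (degree centrality):", max(dc, key=dc.get))
```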

Much computer modeling involves exploration and experimentation on the simulations themselves: through in silico experimentation, the researcher refines and studies these complex models. Different researchers make very different choices about which simplifications are justified and how best to analyze and study their networks. As a result, a large variety of models exist; there is no single best way to do network modeling.


Further Reading

  • Anderson JA (ed) (1995) An introduction to neural networks. MIT Press, Cambridge, MA
  • Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8:183–208
  • Hertz J, Krogh A, Palmer RG (1991) Introduction to the theory of neural computation. Addison-Wesley, Redwood City, CA
  • Izhikevich EM (2003) Simple model of spiking neurons. IEEE Trans Neural Netw 14(6):1569–1572
  • Izhikevich EM (2010) Dynamical systems in neuroscience: the geometry of excitability and bursting. MIT Press, Cambridge, MA
  • Koch C (2004) The quest for consciousness: a neurobiological approach. Roberts and Company, Englewood, CO
  • Koch C, Segev I (eds) (1998) Methods in neuronal modeling: from ions to networks, 2nd edn. MIT Press, Cambridge, MA
  • Lytton WW (2002) From computer to brain. Springer, New York
  • Neymotin SA, Lee HY, Park EH, Fenton AA, Lytton WW (2011a) Emergence of physiological oscillation frequencies in a computer model of neocortex. Front Comput Neurosci 5:19
  • Neymotin SA, Lazarewicz MT, Sherif M, Contreras D, Finkel LH, Lytton WW (2011b) Ketamine disrupts theta modulation of gamma in a computer model of hippocampus. J Neurosci 31(32):11733–11743
  • O’Reilly R, Munakata Y, McClelland JL (2000) Computational explorations in cognitive neuroscience. MIT Press, Cambridge, MA
  • Sporns O (2010) Networks of the brain. MIT Press, Cambridge, MA
  • Tufte ER (1990) Envisioning information. Graphics Press, Cheshire, CT


Author information


Corresponding author

Correspondence to Samuel A. Neymotin, M.Sc.


Glossary

Dynamics

The study of how a process changes over time.

ING

Interneuron network gamma – gamma oscillations generated by the activity of interneurons.

PING

Pyramidal interneuron network gamma – gamma oscillations generated by the interactions of pyramidal and inhibitory neurons.

Raster plot

Plot showing spiking of cells – vertical axis is cell identity and horizontal axis is time.
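
A minimal sketch of how such a raster plot might be drawn, assuming a hypothetical list of per-cell spike-time arrays (generated randomly here as a placeholder):

```python
# Raster plot sketch: each row is one cell, each tick one spike.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Placeholder data: 50 cells, each with a random set of spike times (ms)
spike_times = [np.sort(rng.uniform(0, 1000, rng.integers(5, 30))) for _ in range(50)]

fig, ax = plt.subplots()
ax.eventplot(spike_times, colors="k", linelengths=0.8)   # one row of ticks per cell
ax.set_xlabel("time (ms)")
ax.set_ylabel("cell identity")
plt.show()
```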


Copyright information

© 2013 Springer Science+Business Media, LLC

About this entry

Cite this entry

Neymotin, S.A., Mathew, A., Kerr, C.C., Lytton, W.W. (2013). Computational Neuroscience of Neuronal Networks. In: Pfaff, D.W. (eds) Neuroscience in the 21st Century. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-1997-6_87
