Information Theory started and, according to some, ended with Shannon’s seminal paper “A Mathematical Theory of Communication” (Shannon 1948). Because its significance and flexibility were quickly recognized, there were numerous attempts to apply it to diverse fields outside of its original scope. This prompted Shannon to write his famous essay “The Bandwagon” (Shannon 1956), warning against indiscriminate use of the new tool. Nevertheless, non-standard applications of Information Theory persisted.

Very soon after Shannon’s initial publication (Shannon 1948), several manuscripts provided the foundations of much of the current use of information theory in neuroscience. MacKay and McCulloch (1952) applied the concept of information to propose limits on the transmission capacity of a nerve cell. This analysis foreshadowed later research on what can be termed “Neural Information Flow”: how much information moves through the nervous system, and the constraints that information theory imposes on the capabilities of neural systems for communication, computation and behavior. A second set of manuscripts, by Attneave (1954) and Barlow (1961), discussed information as a constraint on neural system structure and function, proposing that neural structure in sensory systems is matched to the statistical structure of the sensory environment in a way that optimizes information transmission. This is the main idea behind the “Structure from Information” line of research, which is still very active today. A third thread, “Reliable Computation with Noisy/Faulty Elements”, started both in the information-theoretic community (Shannon and McCarthy 1956) and in neuroscience (Winograd and Cowan 1963). With the advent of integrated circuits that were essentially faultless, interest in this topic began to wane. However, as IC technology continues to push toward smaller and faster computational elements (even at the expense of reliability), and as neuromorphic systems are developed with variability designed in (Merolla and Boahen 2006), the topic is regaining popularity in the electronics community, and neuroscientists may again have something to contribute to the discussion.

1 Subsequent developments

The theme that arguably has had the widest influence on the neuroscience community, and is most heavily represented in the current special issue of JCNS, is that of “Neural Information Flow”. The initial works of MacKay and McCulloch (1952), McCulloch (1952) and Rapoport and Horvath (1960) showed that neurons are in principle able to relay large quantities of information. This research led to the first attempts to characterize the information flow in specific neural systems (Werner and Mountcastle 1965), and also started the first major controversy in the field, one which still resonates today: the debate about timing versus frequency codes (Stein 1967; Stein et al. 1972). A steady stream of articles followed, both discussing these hypotheses and attempting to clarify the type of information relayed by nerve cells (Abeles and Lass 1975; Eagles and Purple 1974; Eckhorn and Pöpel 1974; Eckhorn et al. 1976; Harvey 1978; Lass and Abeles 1975; Norwich 1977; Poussart 1971; Stark et al. 1969; Taylor 1975; Walloe 1970).

After the initial rise in interest, the application of Information Theory to neuroscience was extended to a few more systems and questions (Eckhorn and Pöpel 1981; Eckhorn and Querfurth 1985; Fuller and Williams 1983; Kjaer et al. 1994; Lestienne and Strehler 1987, 1988; Optican and Richmond 1987; Surmeier and Weinberg 1985; Tsukuda et al. 1984; Victor and Johanessma 1986), but did not spread much further. This was presumably because, despite strong theoretical advances in Information Theory, its applicability was hampered by the difficulty of measuring and interpreting information-theoretic quantities.

The work of de Ruyter van Steveninck and Bialek (1988) started what could be called the modern era of information-theoretic analysis in neuroscience, in which Information Theory is seeing ever more refined applications. Their work advanced the conceptual aspects of applying information theory to neuroscience and, subsequently, provided a relatively straightforward way to estimate information-theoretic quantities (Strong et al. 1998). That approach removed biases in information estimates due to finite sample size, but its scope of applicability was limited. The difficulties in obtaining unbiased estimates of information-theoretic quantities were noted early on by Carlton (1969) and Miller (1955) and brought back to attention by Treves and Panzeri (1995). However, it took the renewed interest generated by Strong et al. (1998) to spur the research that eventually resolved them. Almost simultaneously, several groups provided the neuroscience community with robust estimators valid under different conditions (Kennel et al. 2005; Nemenman et al. 2004; Paninski 2003; Victor 2002). This diversity proved important, as Paninski (2003) proved an inconsistency theorem showing that the most common estimation techniques can encounter conditions that lead to arbitrarily poor estimates.
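To illustrate the estimation problem these developments address (the symbols K and N are introduced here for illustration): when a discrete response distribution with K occupied bins is estimated from N samples, the naive plug-in entropy estimate is biased downward, to leading order (Miller 1955; Treves and Panzeri 1995), by approximately

E\left[\hat{H}\right] - H \approx -\frac{K-1}{2N\ln 2} \ \text{bits},

and since mutual information is a difference of entropies, the corresponding information estimates are typically biased upward, with the bias growing as the response space becomes large relative to the available data.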

At the same time, several other branches of Information Theory saw application in a neuroscience context. The introduction of techniques stemming from work on quantization and lossy compression (Gersho and Gray 1991) provided lower bounds on information-theoretic quantities, and ideas for inference based on them (Dimitrov and Miller 2001; Samengo 2002; Tishby et al. 1999). Furthermore, a large class of neuron models was characterized as samplers that, under appropriate conditions, faithfully encode sensory information in the spike train (time encoding machines, Lazar 2004). This class includes integrate-and-fire (Lazar and Pnevmatikakis 2008), threshold-and-fire (Lazar et al. 2010) and Hodgkin-Huxley neurons (Lazar 2010).
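A representative objective from the quantization-based line of work is the information bottleneck functional of Tishby et al. (1999), sketched here to convey the general idea: a compressed representation, call it T, of a signal X is chosen by minimizing, over soft assignments p(t|x), the functional

\mathcal{L} = I(X;T) - \beta\, I(T;Y),

where Y is the variable deemed relevant (for instance the stimulus, when X is the neural response) and the parameter β sets the trade-off between compression and preserved information. The Information Distortion method of Dimitrov and Miller (2001) optimizes a closely related quantizer-based trade-off.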

2 Current state

The work presented in the current issue builds on the developments in both information-theoretic and experimental techniques.

Several of the contributions apply a recent development in Information Theory, directed information, to clarifying the structure of biological neural networks from observations of their activity. These works originate from the ideas of Granger (1969) on causal interactions, which were placed in an information-theoretic perspective by Massey (1990), Massey and Massey (2005) and Schreiber (2000). Amblard and Michel (2010) here merge the two ideas and extract Granger causality graphs by using directed information measures. They show that such tools are needed to analyze the structure of systems with feedback in general, and of neural systems specifically. The authors also provide practical approximations with which to estimate these structures. Quinn et al. (2011) present a novel, robust nonlinear extension of the linear Granger tools and use it to infer the dynamics of neural ensembles from physiological observations. In particular, the procedure uses point process models of neural spike trains, performs parameter and model order selection with minimal description length, and is applied to the analysis of interactions in neuronal assemblies in the primary motor cortex (MI) of macaque monkeys. Vicente et al. (2011) investigate transfer entropy (TE) as an alternative measure of effective connectivity for electrophysiological data, based on simulations and on magnetoencephalography (MEG) recordings in a simple motor task. The authors demonstrate that TE improves the detectability of effective connectivity for non-linear interactions, and for sensor-level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction. Using neocortical column neuronal network simulations, Neymotin et al. (2011) demonstrate that networks with greater internal connectivity reduce input/output correlations from excitatory synapses and decrease negative correlations from inhibitory synapses; these changes were measured with normalized TE. Lizier et al. (2011) take the TE idea further and apply it to functional imaging data to determine the direction of information flow between brain regions. As a proof of principle, they show that this approach enables the identification of a hierarchical (tiered) network of cerebellar and cortical regions involved in motor tasks, and reveals how the functional interactions of these regions are modulated by task difficulty.
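For reference, the transfer entropy used in several of these contributions is defined, following Schreiber (2000) and with x_t^{(k)} and y_t^{(l)} denoting the length-k and length-l pasts of the two processes, as

TE_{Y \to X} = \sum_{x_{t+1},\, x_t^{(k)},\, y_t^{(l)}} p\!\left(x_{t+1}, x_t^{(k)}, y_t^{(l)}\right) \log_2 \frac{p\!\left(x_{t+1} \mid x_t^{(k)}, y_t^{(l)}\right)}{p\!\left(x_{t+1} \mid x_t^{(k)}\right)},

that is, the conditional mutual information I(X_{t+1}; Y_t^{(l)} \mid X_t^{(k)}). It vanishes when the past of Y contributes no predictive information about X beyond X’s own past, which is what makes it a natural information-theoretic counterpart of Granger causality.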

As mentioned above, the “structure from information” theme is represented by a robust stream of investigations, centered on the notion that neural circuitry exploits the statistical features of the environment to enable efficient coding of sensory stimuli (Barlow 1961). The overwhelming majority of studies concern the implications of this principle for processing at the level of single neurons (e.g., Atick and Redlich 1990). In this issue, however, Vanni and Rosenström (2011) consider a larger scale of organization. Using functional imaging techniques to assess the distribution of activity levels in neuronal populations in visual cortex, they show that context effects can be interpreted as a means to achieve decorrelation and, hence, efficient coding.

The use of information-theoretic tools to probe the structure of network firing patterns has recently been the subject of great interest, sparked by work from two laboratories (Shlens et al. 2006, 2009; Schneidman et al. 2006; for a review see 2007), which showed that retinal firing patterns are very well described by a pairwise maximum-entropy model. Not surprisingly, given its complex local circuitry, the pairwise model fails in cortex (Ohiorhenuan et al. 2010); the present contribution from Ohiorhenuan and Victor (2011) shows that the deviations from the pairwise model are highly systematic, and how they can be characterized.
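For concreteness, the pairwise maximum-entropy model at issue (the Ising-like form used by Schneidman et al. 2006 and Shlens et al. 2006) assigns to a binned population response x = (x_1, ..., x_n), with x_i = 1 if cell i spikes in a given time bin and x_i = 0 otherwise, the probability

P(x_1, \ldots, x_n) = \frac{1}{Z} \exp\!\left( \sum_i h_i x_i + \sum_{i<j} J_{ij} x_i x_j \right),

where the fields h_i and couplings J_{ij} are chosen to reproduce the measured firing rates and pairwise correlations, and Z normalizes the distribution. Systematic deviations from this form, such as those reported in cortex, indicate that higher-order correlations carry structure that pairwise statistics cannot account for.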

Several contributions also engage classic information-theoretic tools from channel and source coding. Kim et al. (2011) report for the first time that the responses of olfactory sensory neurons expressing the OR59b receptor in fruit flies are very precise and reproducible. The response of these neurons depends not only on the odorant concentration, but also on its rate of change. The authors demonstrate that a two-dimensional encoding manifold, in the space of odorant concentration and concentration gradient, provides a quantitative description of the neuron’s response. By defining a distance measure between spike trains, Gillespie and Houghton (2011) present a novel method for calculating the capacity of spike train channels. As an example, they calculate the capacity of a data set recorded from auditory neurons in zebra finch. Dimitrov et al. (2011) present a twofold extension of their Information Distortion method (Dimitrov and Miller 2001) as applied to the problem of neural coding. On the theoretical side, they develop the idea of joint quantization, which provides optimal lossy compressions of the stimulus and response spaces simultaneously. The practical application, to neural coding in the cricket cercal system, introduces a family of estimators that give the method greater flexibility across different systems and experimental conditions.
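The common thread in this group of papers is the mutual information I(X;Y) between stimulus X and spike-train response Y. The channel-coding question asks for the capacity

C = \max_{p(x)} I(X;Y),

the maximum over stimulus distributions p(x), while the source-coding (quantization) perspective seeks reduced descriptions of X and Y that preserve as much of I(X;Y) as possible; the practical difficulty in either case is evaluating mutual information when Y ranges over a high-dimensional space of spike trains.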

And finally, Lewi et al. (2011) use Information Theory to approach the question “What are the best ways to study a biological system?”, when the goal is to characterize the system as efficiently as possible. Tzanakou et al. (1979) initially posed it in the form “Which is the ‘best’ stimulus for a sensory system?”. More recently, Machens (2002) rephrased the problem as “What is the best stimulus distribution that maximizes the information transmission in a sensory system?”. The present paper takes the authors’ general formulation (Paninski 2005), “search for stimuli that maximize the information between system description and system observations,” extends it to continuous stimuli, and applies it to the analysis of the auditory midbrain of zebra finches.
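In symbols (notation introduced here for illustration): with θ denoting the system description, y the observation evoked by stimulus x, and D_t the data collected up to trial t, such a procedure selects

x_{t+1} = \arg\max_x\, I(\theta;\, y \mid x, D_t),

so that each new stimulus is chosen to be maximally informative about the model of the system, given everything already observed.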

In conclusion, Information Theory is thriving in the neuroscience community, and the long effort is bearing fruit, as diverse research questions are being approached with ever more elaborate and refined tools. As demonstrated by this issue and the complementary compilation of the Information Theory Society (Milenkovic et al. 2010), Information Theory is firmly integrated into the fabric of neuroscience research, and of a progressively wider range of biological research in general, and will continue to play an important role in these disciplines. Conversely, neuroscience is starting to serve as a driver for further research in Information Theory, opening interesting new directions of inquiry.