Logarithmic distributions prove that intrinsic learning is Hebbian

[version 2; peer review: 2 approved]
PUBLISHED 11 Oct 2017

Abstract

In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The differences between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), between neurotransmitters (GABA in striatum vs. glutamate in cortex), and in the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only synaptic weights, but also intrinsic gains, must undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.

Keywords

neural coding, synaptic weights, Hebbian learning, intrinsic excitability, rate coding, spike frequency, neural circuits, neural networks, lognormal distributions.

Revised Amendments from Version 1

Thanks to the reviewers for the thorough review and many thoughtful comments. I made two major revisions:

  1. I added a few sentences in Results section 3.1 to explain why a lognormal distribution can be considered the best fit to the data. I also added a sentence about how differences in bursting and irregularity do not affect the spike rate distribution (Mochizuki et al., 2016)
     
  2. Considerably more material and references on Hebbian learning are now included in the Introduction (penultimate paragraph), with many new references, including Petersen and Berg's 2016 paper
I decided to leave the Methods section intact; the data analysis is tedious, and it would disrupt the flow of an essentially simple argument. It therefore remains in the Methods section, available for those who want to dig deeper.
I also made a few minor word changes in the Abstract and Introduction, and improved the colouring in Figure 10 for better visibility.


1 Introduction

Individual neurons have very different, but mostly stable, mean spike rates under a variety of conditions [1,2]. In reports of behavioral results, spike counts are often normalized with respect to the mean for each neuron. But this obscures an important question: why do neurons within a tissue operate at radically different levels of output frequency? To answer this question, our approach is twofold: (a) we document this phenomenon for different neural tissues and behavioral conditions, and also examine the distributions of the underlying neural properties, namely intrinsic gains and synaptic weights; (b) we build a very generic neural model to explore the conditions for generating and maintaining these distributions. First, we give examples of the distribution of mean spike rates for principal neurons under spontaneous conditions, as well as in response to stimuli. We furthermore document distributions of intrinsic excitability [3–5] for cortical and striatal neurons, as well as synaptic weight distributions [6–11].

With the current data, we show that the distribution of spike rates within any neural tissue follows a power-law-like distribution, i.e. a distribution with a 'heavy tail'. There is also a small number of very low-frequency neurons, so that we obtain a lognormal distribution [2]. This lognormal distribution is present in spontaneous spike rates as well as under behavioral stimulation. For each neuron, the deviation from the mean rate attributable to a stimulus is small (CV = 0.3–1, standard deviation = 1–4 spikes/s) when compared to the variability in mean spike rate over the whole population (5- to 7-fold), cf. Table 1 and Table 3.

Table 1. Statistics of spike rate distributions in different tissues.

Tissue                   | Mean µ | Median µ* | Variance σ² | Width σ* | Mode e^(µ−σ²) | n
IT cortex [39]           | 1.5    | 4.50      | 0.71        | 2.32     | 2.2           | 100
A1 cortex [2]            | 1.6    | 4.95      | 0.47        | 1.98     | 3.1           | 145
A1 cortex [40]           | 3.3    | 27.11     | 0.69        | 2.29     | 13.6          | 263
Purkinje in vitro [42]   | 3.46   | 31.82     | 0.14        | 1.46     | 27.6          | 106
Purkinje in vitro [41]   | 3.44   | 31.19     | 0.198       | 1.56     | 30.0          | 34
Purkinje in vivo [43]    | 3.5    | 33.12     | 0.47        | 1.98     | 20.7          | 319
Inferior colliculus [45] | 3.31   | 27.39     | 0.90        | 2.58     | 11.1          | 330

This work refers back to data initially reported in [1]. At the time, we only had data on spike rates of cortical neurons available, plus independent evidence on intrinsic properties of striatal neurons. The observation on cortical data was taken up by [2,12], and led to a number of papers [13,14] focusing on the power-law distribution of spike rates as a cortical phenomenon, seeking explanations in the recurrent excitatory connectivity of cortical tissue [2,13,15,16]. However, we find the same spike rate distributions for midbrain nuclei, medium striatal neurons and cerebellar Purkinje cells, which do not have this kind of connectivity. It has even been found in the spinal motor networks of turtles [17]. We then extended the data search to intrinsic excitability and found that lognormal distributions are ubiquitous there as well, at least in cortical and striatal tissues. Finally, lognormal distributions have also been found for synaptic weights [6–11,18]. The explanation for this universal phenomenon must therefore lie elsewhere.

For this purpose we constructed a generic model of neuronal populations with adaptable weights and gains. We initialized both weights and gains with uniform, Gaussian or lognormal distributions. We then employed either Hebbian or homeostatic adaptation rules on both. Under a variety of conditions we could show that lognormal distributions develop from any initial distribution only with Hebbian (positive) adaptivity. Additional homeostatic adaptation stabilized learning, but erased the lognormal distributions if it was stronger than Hebbian adaptation. We could even show that the widths of the distributions from the model match the experimental data for rates, weights, and gains (Table 1–Table 3) that we have available. Lognormal distributions can only be maintained by positive, Hebbian-type learning rules [15], while homeostatic plasticity alone destroys lognormal distributions [19]. There are a number of different learning rules and variants which all follow the 'Hebbian' principle: strong activation leads to strengthening, weak activation leads to weakening [20]. STDP rules are a variant of Hebbian learning for spiking neurons, which emphasize temporal sequence, but have the same positive learning effect [21–23]. It has been noticed that positive learning rules lead to run-away activation and unstable network behavior, and that they need to be counteracted by homeostatic processes [24,25]. We present a generalized model of synaptic learning which consists of both Hebbian (positive) and homeostatic (negative) adaptation rules [24], and show that positive (Hebbian) learning is necessary to establish a lognormal synaptic weight distribution.

For intrinsic learning it has often been assumed that it implements purely homeostatic adaptation [26–30], but see also [31]. Experimental results are often inconsistent [4,32–38]. We will present results for a lognormal gain distribution in a number of tissues. It will be shown by simulation that the same principle holds: only a Hebbian, positive learning rule is capable of maintaining lognormal distributions, while homeostatic adaptation serves to establish stability. This finally answers a question that experimental researchers have investigated for some time: is intrinsic plasticity mostly homeostatic, i.e. does it adjust values inversely to use, or is there positive, Hebbian learning involved: when a neuron fires, does its gain increase? The answer is that the attested distribution of intrinsic gain can only derive from a Hebbian-style adjustment rule, even though additional homeostatic adaptation is possible. Intrinsic plasticity is Hebbian.

2 Methods

In this section we first report on data collection for spike rates, intrinsic properties and synaptic weights. Second, we explain the simulation model we constructed to explore the generation and persistence of the attested distributions.

2.1 Experimental data

We analyze five data sets for spike rates from principal neurons under behavioral activation:

  1. inferior temporal (IT) cortex from monkeys [39]

  2. primary auditory cortex (A1) from monkeys [40]

  3. primary auditory cortex (A1) of rats [2]

  4. Purkinje cells in cerebellum [41–44]

  5. midbrain principal cells from inferior colliculus (IC) from the guinea pig [45]

In monkey IT, single-unit activity was recorded over 200ms of passive viewing of 77 different natural stimuli for 100 neurons, each stimulus shown 10 times [39]. This yielded 770 spike rate response data points per neuron. For these data, we show mean spike rate, standard deviation, max-min values, coefficient of variation (CV) and Fano factor (FF) (Figure 1, cf. [1]). What is remarkable is that the dispersion for each neuron (variance relative to mean, FF) is fairly constant, and not related to the rank of a neuron as high- or low-frequency. In other words, neurons have roughly similar behavioral responses relative to their average spike rate. For this reason, many behavioral experiments have reported the percentage increase/decrease of spiking as the relevant parameter.
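
For illustration, the per-neuron statistics in Figure 1 can be computed along the following lines (a minimal Matlab sketch; the matrix name and layout are assumptions, not the original analysis code):

    % Per-neuron dispersion statistics, assuming `counts` is a 100 x 770
    % matrix of spike counts (neurons x stimulus presentations).
    m  = mean(counts, 2);        % mean spike count per neuron
    sd = std(counts, 0, 2);      % standard deviation per neuron
    cv = sd ./ m;                % coefficient of variation (CV)
    ff = sd.^2 ./ m;             % Fano factor (variance/mean), ~2 in Figure 1C
    [~, order] = sort(m);        % rank neurons by mean count, as in Figure 1A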


Figure 1. Spike rate data for neurons from inferotemporal cortex (IT) in monkeys [39].

100 neurons, passive viewing of 77 stimuli, 10 trials (770 data points per neuron), data collected over 200ms. The data are shown for each neuron, where neurons are sorted by mean spike count. A: Mean spike rates (blue), standard deviation (red), and minimum/maximum absolute values (green). B: Mean spike rates histogram shows a lognormal distribution (σ*=2.32). C: Distributions of standard deviation (green) and CV (blue) have linear slopes, with small variation. The Fano factor (red), measuring the dispersion for each neuron, is nearly constant at about 2.

Additionally, we show data from primary auditory cortex (A1) of awake monkeys, recorded as spike responses to a 50ms, 100ms, or 200ms pure tone ([40], Figure 2A and B), and data for spike rates from the primary auditory cortex of rats under four different conditions, obtained with cell-attached in vivo recordings ([2], Figure 2C).


Figure 2. Responses to pure tones in primary auditory cortex from awake monkeys [40] and firing rates from primary auditory cortex in rats [2].

A: Distribution of spike rates to a 50ms tone (n = 119, red), 100ms tone (n = 115, blue), or 200ms tone (n = 23, green) in primary auditory cortex in monkeys [40]. B: Histogram of the spike rate distribution for the 100ms tone response (n = 115) fitted by an exponential (red) or lognormal (blue) distribution [40]. C: Spontaneous spike rate distribution from primary auditory cortex in unanesthetized rats [2] fitted by an exponential (red) or lognormal (blue) distribution. Note that the spontaneous firing rates are much lower and more narrowly distributed than evoked spikes in response to stimuli at short time scales (B), but that they still follow a lognormal distribution.

For midbrain nuclei neurons (IC), we re-analyzed spike rates in response to tones (for 200ms after stimulus onset) under variations of binaural correlation [45]. The frequency ranking of neurons by mean spike rate, standard deviation, min-max values, CV and FF is shown in Figure 3A–C. CV and FF are similar to the cortical data. Data from GABAergic cerebellar Purkinje cells present some difficulty for this analysis, since they have regular single spikes at high frequencies and, in addition, calcium-based complex spikes [43]. Complex spike rates, however, are low (<1Hz). This can therefore be regarded as a form of multiplexing, with two separate codes, where single spike rates can be separately assessed in their distribution. Here we report data for single spikes from in vivo recordings in anesthetized rats ([43], Figure 4A) and data from spontaneous spiking (in the absence of synaptic stimulation) under in vitro conditions ([41,42], Figure 4B and C).


Figure 3. Neuronal response to binaural stimulation for inferior colliculus of the guinea pig [45] (n = 30), data collected over 100 ms, 200–500 trials.

The data are shown for each neuron, with neurons sorted by mean spike count. A: Mean spike rates (blue), standard deviation (red), and minimum/maximum absolute values (green). B: Mean spike rates histogram shows a lognormal distribution. C: Distributions of standard deviation (green), CV (blue) and FF (red). Again, the dispersion is fairly constant.


Figure 4. Spike rates for cerebellar Purkinje cells from rats [41–44].

A: Data for single spikes for Purkinje cells recorded from anesthetized rats (spontaneous in vivo) [43] (n = 346). B: Data for spike frequencies of isolated cell bodies of mouse Purkinje cells in vitro [41] (n = 34). C: Spontaneous spike rates for Purkinje cells in slices (n = 106) [42]. D: Spike counts per neuron from [41] (n = 34), together with variability data from [44] (n = 2).

In order to show values for standard deviation and variance, data for two Purkinje cells from a behavioral experiment [44], i.e. single spike rates during arm movements in monkeys, have been added to the ranking of spontaneously firing neurons by mean spike rate from [41] (Figure 4D).

The logarithmic (heavy-tailed) distribution of spike rates is evident under all conditions.

The distribution of spike rates for neurons spiking in the absence of synaptic input shows that there are differences in the intrinsic excitability of neurons. To explore this further, we looked at three additional datasets, which report the action potential firing of a cell in response to injected current, such as a constant pulse. This defines the neuronal gain parameter (spike rate divided by current, [Hz/nA]):

  1. medium striatal neurons in slices from rat dorsal striatum and nucleus accumbens shell (NAcb shell) [3], cf. [1], Figure 5

  2. cortical neurons in cat area 17 in vivo [46], Figure 6A and B

  3. striatal neurons from globus pallidus (GP) from awake rats [5], Figure 6C


Figure 5. Spike rate and gain distributions in basal ganglia [3].

A: Spike rate in response to a 300ms constant current pulse at 180pA (blue), 200pA (green), and 220pA (red) for neurons in dorsal striatum (n = 28, solid lines) and nucleus accumbens shell (n = 24, dashed lines). B: Gain (Hz/nA) for neurons in dorsal striatum (n = 28). C: Gain (Hz/nA) for neurons in NAcb shell (n = 24).


Figure 6. Gain [Hz/nA] for cortical and striatal neurons.

A: Gain for all types of cortical neurons in vivo (n = 220) [46]. B: Gain for fast-spiking cortical neurons only (n = 33) [46]. C: Gain for neurons in globus pallidus (GP) in response to a +100pA current pulse (n = 145) [5].

In [1], we already presented the data from striatum, which show that the spike response to a constant current follows a heavy-tailed distribution [3]. Figure 5A shows the spike rate in response to current pulses of different magnitude in two different areas, nucleus accumbens (NAcb) shell and dorsal striatum. Figure 5B and C show the gain distributions for dorsal striatal and NAcb shell neurons. Distributions appear mostly lognormal, with the exception of the 200pA current pulse response and the data in Figure 5C, which appear normally distributed.

We extend this dataset with recordings from different types of cortical neurons in cat area 17 in vivo ([46], Figure 6A and B) and from GP in awake rats ([5], Figure 6C).

A lognormal distribution of intrinsic gain is clearly apparent, except for fast-spiking interneurons, which, however, may reflect the small sample size (n = 33).

Synaptic weight distributions have been investigated, starting with [10] in hippocampus, by measuring EPSC magnitudes [6,47–50] (Figure 7). A review paper [51] summarizes the findings. Recently, the expression of the AMPA receptor subunit GluA1, which is correlated with spine size, has also been measured ([52], Figure 8). We used five datasets from cortex, hippocampus and cerebellum:

  1. EPSPs for deep-layer pyramidal-pyramidal cell connections in rat visual cortex [6,47]

  2. EPSP amplitudes for deep-layer excitatory neuron connections in somatosensory cortical slices of juvenile rats [49]

  3. EPSP amplitudes for CA3 to CA1 connections in guinea pig hippocampal slices [10]

  4. EPSCs for granule cells to Purkinje cells in adult rat cerebellar slices [50]

  5. labeled GluA1 AMPA receptor subunit in mouse somatosensory barrel cortex [52]


Figure 7. Strengths of EPSPs in cortex, hippocampus and cerebellum [6,10,47,49,50].

A: Cortex: deep-layer (L5) pyramidal-pyramidal cell connections [6,47]. B: Cortex: deep-layer (L5) pyramidal-pyramidal cell connections [49]. C: Hippocampus: CA3 to CA1 connections [10]. D: Cerebellum: granule cells to Purkinje cells [50].


Figure 8. AMPA subunit distribution as a marker of synaptic weight.

A: Expression of labeled GluA1 AMPA receptor subunit in layer 2/3 mouse barrel cortex in vivo follows a lognormal distribution (σ* = 2.59, µ* = 0.32, n = 560). B: GluA1 density for control (black) and after 1 hour of whisker stimulation (red). Stimulation leads to an increase of GluA1 in 30% of neurons [52].

In [6], EPSP magnitude was measured for L5 pyramidal neurons in slices from rat visual cortex, averaged over 45–60 responses, with peak amplitude recorded (Figure 7A). Similar data were used in [49] for slices from a single barrel column in rats (Figure 7B). A 30-fold variation of coupling strength was noted. In [10], EPSPs between CA3 and CA1 in hippocampal slices were recorded by detecting somatic membrane potential changes in response to presynaptic neuron stimulation (Figure 7C). There are also synaptic weight data on granule cell to Purkinje cell connections [11,50,51], which show a similar distribution, but with connections an order of magnitude weaker than cortical ones (Figure 7D). Finally, a different type of evidence was obtained in [52], namely labeling for a subunit of AMPA receptors in layer 2/3 mouse barrel cortex in vivo, both before and after whisker stimulation. The AMPA intensity is distributed lognormally over the spines, corresponding to the observations on the strengths of EPSPs. It is noticeable that stimulation leads to an increase of on average 200% (two-fold) in about 30% of spines [52]. Yet, as we know, the overall distribution of synaptic strengths remains stable over time. For synaptic weights, just as for intrinsic gains and spike rates, lognormal distributions have been found, for both EPSPs and AMPA receptor expression, in a highly consistent manner.

In many cases, the data were only available in the form of histograms. The parameters of the lognormal distribution were then obtained by fitting the histograms using a Nelder-Mead optimization method. A number of parameters were derived from these fits and are reproduced in Table 1–Table 3, cf. Section 3.2.
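
This fitting step can be sketched as follows, using Matlab's fminsearch, which implements the Nelder-Mead simplex method (the least-squares objective and variable names are assumptions about the procedure, not the original code):

    % Fit a lognormal to histogram data, assuming `x` holds bin centers (x > 0)
    % and `h` the normalized bin heights, both as row vectors.
    lognpdf_ = @(x, mu, sg) exp(-(log(x) - mu).^2 / (2*sg^2)) ./ (x * sg * sqrt(2*pi));
    sse = @(p) sum((h - lognpdf_(x, p(1), p(2))).^2);  % squared fitting error
    p0 = [log(median(x)), 1];                          % initial guess for [mu, sigma]
    p  = fminsearch(sse, p0);                          % Nelder-Mead search
    mu_star    = exp(p(1));                            % median, as in Table 1-Table 3
    sigma_star = exp(abs(p(2)));                       % multiplicative SD (width)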

Table 2. Statistics of intrinsic excitability (gain) in different tissues.

Tissue              | Peak [Hz/nA] | Mean µ | Median µ* | Variance σ² | Width σ* | Mode e^(µ−σ²) | n
Dorsal striatum [3] | 48           | 4.24   | 69.41     | 0.36        | 1.82     | 48.3          | 28
NAcb shell [3]      | 65           | 4.67   | 106.70    | 0.49        | 2.01     | 65.4          | 24
GP in vivo [5]      | 6.6          | 3.4    | 29.96     | 1.54        | 3.46     | 6.4           | 146
GP model [5]        | 37           | 4.0    | 54.60     | 0.40        | 1.88     | 36.6          | 10000
Cortical [46]       | 105          | 4.96   | 142.59    | 0.31        | 1.75     | 104.6         | 220

Table 3. Statistics of synaptic weight distributions in different tissues.

Tissue                | Mean µ | Median µ* | Variance σ² | Width σ* | Mode e^(µ−σ²) | n
Cortex L2/3 [7]       | -0.99  | 0.37      | 0.76        | 2.39     | 0.17          | 48
Cortex L2/3 [8]       | 0.25   | 1.28      | 1.41        | 3.28     | 0.31          | 35
Cortex L2/3 [9]       | -0.94  | 0.39      | 1.54        | 3.46     | 0.08          | 61
Cortex L5 [47]        | -0.56  | 0.57      | 1.47        | 3.36     | 0.13          | 1004
Cortex L5 [49]        | -0.31  | 0.73      | 0.58        | 2.14     | 0.41          | 26
Hippocampus [10]      | -2.61  | 0.07      | 0.43        | 1.93     | 0.05          | 71
Cerebellum GC-PC [11] | -2.70  | 0.07      | 1.82        | 3.85     | 0.01          | 104
Cortex in vivo [52]   | -1.14  | 0.32      | 0.90        | 2.59     | 0.13          | 560

2.2 Simulation model

Given are two neuron populations I and J, each with n = 1000 neurons, and variable random connectivity C between I and J; C determines the connection density. The input population I always has excitatory output onto J. Inhibitory input to J is modeled by a population H with n = 200. The output neuron population J may also have recurrent excitatory connectivity. Figure 9 shows the architecture of the generic neural network used. The model (GNN) was programmed in Matlab and is available in a public GitHub repository (https://github.com/gscheler/GNN, DOI: https://doi.org/10.5281/zenodo.829949).


Figure 9. Generic neural network model with neuron populations I, J (excitatory, blue arrows) and H (inhibitory, red arrows).

Lognormal distributions occur for gain G, rates RI, RJ, RH, and weight distributions WIJ. J may have recurrent connectivity.


Figure 10. The width of the rate distribution for J, σRJ*, depends heavily on the gain σG*, but not on the weight distribution σW*.

There is a slight effect of connectivity (upper sheet C = 5%, lower sheet C = 10%). (µW* = 0.7, µG* = 30, N = 1000, σRI* = 2.74, µRI* = 4.5.)

The input population I is modeled according to [2] for pyramidal cortical neurons, with a spike rate distribution of µ* = 4.95 and σ* = 1.98 (Table 1). The goal is to generate a spike rate distribution RJ for J, given a gain distribution G for the target neurons and the weight distribution WIJ, such that RJ is similar to RI.

For each neuron j in J, the spike rate r_j is calculated by applying its gain g_j to the weighted sum of its connected excitatory inputs minus its inhibition. C_j is the set of neurons from I that have excitatory connections to neuron j:

r_j = g_j ( ∑_{i ∈ C_j} w_ij r_i − r_j^H )        (1)

where the rate r_i is taken from the distribution RI, r_j^H from RH, w_ij from WIJ, and g_j from G. g_j is modeled as the factor of a linear gain function. It is possible to use a sigmoidal gain function instead, but this makes no difference for the conclusions from the model (Section 3.3). The output RJ may be used as input to I with a matrix WI,J for tests of the adaptation rules.
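
In code, one pass of Eq. (1) reduces to a single matrix operation. The following Matlab sketch uses assumed variable names (W(j,i) holds w_ij, C is a logical connectivity mask, and lognrnd from the Statistics Toolbox draws lognormal samples); parameter values are illustrative and weight scaling is not tuned:

    n  = 1000;
    C  = rand(n) < 0.10;                       % 10% random connectivity from I to J
    W  = lognrnd(log(0.7), 0.5, n, n);         % example weights, W(j,i) = w_ij
    G  = lognrnd(log(30), 0.3, n, 1);          % gains g_j
    RI = lognrnd(log(4.95), log(1.98), n, 1);  % input rates, mu* = 4.95, sigma* = 1.98
    RH = lognrnd(0, 0.3, n, 1);                % inhibitory drive r_j^H (illustrative)
    RJ = G .* ((W .* C) * RI - RH);            % Eq. (1): r_j = g_j(sum_i w_ij r_i - r_j^H)
    RJ = max(RJ, 0);                           % rectification: no negative rates (an assumption)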

For the adaptation of weights WIJ and gains G, we use Hebbian or homeostatic rules, as described in Section 3.3. The system described in this way is sufficient for all the calculations on the shape of distributions used in this paper.

3 Results

3.1 Universality of lognormal distributions

We have documented the distribution of spike rates, gains, and weights for different types of neurons (Figure 1–Figure 8). The distribution in all cases follows a lognormal shape. In some cases, we had data on the variability of spike rates and analyzed them for dispersion (CV, FF) under behavioral stimulation. While the fold-change from low-spiking to high-spiking neurons is high, 5- to 7-fold, the variability for each neuron is comparatively low. It also seems to be adequately described by a percentage change over the whole population. This means that a low-spiking neuron never reaches the same rate as a high-spiking neuron, even when fully activated.

The similarities across neural systems are striking. For instance, in a midbrain nucleus (inferior colliculus), which is essentially an 'output' site for auditory and somatosensory cortex, spike rates are high overall [45]; nonetheless, the distribution of mean spike rates and their variability are comparable to cortical data (Figure 3). Hippocampus, cerebellum and cortex vary in degree of bursting and spike irregularity [53], but the rate distribution is constant. The distribution of mean spike rates is also essentially the same under spontaneous and under behavioral conditions.

Lognormal distributions were obtained by fits to the histograms obtained from the data (goodness of linear fit, mean 0.92, see Figure 1–Figure 8). The lognormal distribution is a very simple statistical distribution [54], almost as simple and as universal in the description of natural processes as a Gaussian distribution (with which it is essentially identical for small σ*). Even though the datasets were occasionally fairly small, and more data could be added to obtain greater precision, the conclusion seems warranted that the underlying natural process is as simple and general as the multiplication of independent variables [55], rather than assuming more complex processes which may lead to other exponential-family distributions.

Lognormal rate distributions appear to be an essential property of neural tissues that occur in areas with very different neuron types and connectivity, and different absolute spike frequencies. They are present during spontaneous activity, and under activation of a network, in vivo as well as in vitro. They have a counterpart in a lognormal distribution of intrinsic excitability, and lognormal synaptic connectivity. This type of distribution seems to be an essential component of the functional structure of a mature network, which is not altered by learning, plasticity, or processing of information.

3.2 Data analysis for distributions

A lognormal distribution is characterized by the parameters µ* and σ*. µ* = e^µ is the median, a scale parameter which determines the height of the distribution. σ* = e^σ is the multiplicative standard deviation, a shape parameter which determines the width of the distribution. For distributions with small σ* (approximately σ* < 1.2, or σ < 0.182), a lognormal distribution is essentially identical to a normal distribution. (The coefficient of variation CV ≈ σ* − 1, so that for CV < 0.18 a lognormal distribution approximates a normal one.) We collected data on spike rate, gain and synaptic weight distributions for a number of tissues under different experimental conditions (Table 1–Table 3). For the height of the spike rate distribution, there are known differences, e.g. lower values for cortex (µ* ≈ 4.5) and higher values for Purkinje cells (µ* ≈ 30) and midbrain nuclei (cf. Table 1). In other words, spike rates differ between brain areas such as cortex and cerebellum by a factor of 10.
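
As a concrete example, the derived quantities reported in Table 1–Table 3 follow directly from the fitted µ and σ², shown here for the IT cortex row of Table 1:

    % Derived lognormal parameters for the IT cortex row of Table 1.
    mu = 1.5;  sigma2 = 0.71;
    mu_star    = exp(mu);            % median: 4.50
    sigma_star = exp(sqrt(sigma2));  % multiplicative SD (width): 2.32
    mode_ln    = exp(mu - sigma2);   % mode: 2.2
    cv_approx  = sigma_star - 1;     % CV ~ sigma* - 1 for narrow distributions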

In contrast, the width of spike rate distributions is more similar across tissues, with an average of σ* ≈ 2.2 and one outlier. The gain has a smaller σ*, i.e. a more normal, less heavy-tailed distribution than the spike rate. Excluding the outlier (3.46), the mean σ* is only 1.86, considerably lower than the width of the spike rate distribution (Table 2). For weight distributions (Table 3), the width σ* is consistently larger, with an average of almost 3 (2.91). The synaptic strength (µ*) varies over at least one order of magnitude between cortex and cerebellum.

It turns out that σ* values are significantly different for rates, gains and weights: lowest for gains (σ* ≈ 1.8), higher for rates (σ* ≈ 2.2) and highest for weights (σ* ≈ 3). The data that we have are not precise enough to draw quantitative conclusions, but no large distinctions are apparent between the tissues (Table 1–Table 3). We use a generic neural network to recreate lognormal distributions by adaptation rules, and we will also show that distribution widths are structural properties which follow from general network properties.

3.3 Generating lognormal distributions with generic neural networks

Since not only mean spike rates but also both underlying components, intrinsic excitability and synaptic weights, have lognormal distributions, the question arises of how the functional system that we observe is generated. It is obvious, if the data are accurate, that these are basic parameters of any simulation and need to be reproduced in any model to make it biologically realistic.

We set up a generic neural network model (cf. Section 2.2) to explore the mechanisms of generating and maintaining rate, weight and gain distributions. The model consists of a source neuron group I, a target group J, a population of inhibitory neurons H, which are connected with J, and potentially recurrent excitation in the target group J. The spike rate distribution RI acts through a weight distribution W onto a gain distribution G, where inhibition H is subtracted, and a spike rate output distribution RJ is produced (Figure 9).

In the simplest case, we look at two sets of neurons, the source and the target. The source sends excitatory connections to the target, and exhibits variable weights at outgoing synapses. The input that a target neuron receives is fed through a linear filter G to produce an output rate RJ according to Eq (1). The distribution for RJ depends on G and W as well as on RI. The system is sufficient for calculations on the shape of distributions, as well as the effects of Hebbian and homeostatic plasticity.

We have explored the dependencies between gain, weight and rate distributions in simulations. First, we found that the width of the output spike rate distribution RJ depends heavily on the gain distribution, but only slightly on the input weight distribution (Figure 10). It does depend on the overall connectivity C, with σRJ* wider for lower connectivity, but not by much (Figure 10). Second, the width of the output distribution RJ does not depend on RI or RH either (Figure 11). The most important factor for the spike rate distribution remains the gain width σG*.


Figure 11. The width of the rate distribution for J, σRJ*, does not depend on RI or RH.

(µW* = 0.7, µG* = 30, C = 10%, N = 1000, µRI* = 4.5.)

3.4 Adaptation

We may now ask: where do lognormal spike rate distributions come from? How is the system set up, i.e. what rules of adaptation generate lognormal distributions in weights and gains?

In the case of cortical networks, there are excitatory recurrent interactions that constitute a significant part of total input. In the case of cerebellar or striatal neurons, there are no recurrent excitatory interactions, only inhibitory interneurons and excitatory input. The generation of lognormal distributions must therefore be independent of recurrent excitation. It requires a system where continuous input shapes the weights and gains of a target network J. We start with the system that we described before, with random assignment of weights and gains. We employ adaptivity for weights, and also for gains, by positive Hebbian learning, or by negative homeostatic learning. The output of I is fed into J, and W and G are adaptive. Additionally, J may have excitatory recurrent connectivity, and learning takes place within the network J.

From any given initial spike rate distribution (Gaussian, uniform, lognormal) for I, we calculate W assuming a positive learning (Hebbian) adjustment rule, which is dependent on input and output frequencies. Each individual weight w_ij is updated by

w_ij = w_ij + λ w_ij (r_i^I r_j^J − µ)

We use parameters λ and µ such that the generated spike rate output RJ is compatible in strength with the input rate RI.
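
Vectorized over all connected pairs, this Hebbian update can be sketched as follows, continuing the variable conventions of the sketch in Section 2.2 (the parameter values are illustrative assumptions):

    % Hebbian weight update: w_ij grows when pre- and postsynaptic rates
    % covary above the threshold mu; W(j,i) holds w_ij, C masks connections.
    lambda = 1e-4;  mu = 1;                      % illustrative values
    W = W + lambda * W .* (RJ * RI' - mu) .* C;  % (RJ*RI')(j,i) = r_j^J * r_i^I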

Using Hebbian learning, we generate a weight distribution W that is lognormally distributed, independent of the initial configuration or the distribution of the gains in the system (Figure 12). The lognormal distribution also develops independently of the rate distribution of the inputs; it merely develops faster with lognormal rather than normally distributed spike rate input (not shown). It makes no difference whether we use a recurrent system J, or a non-recurrent population J with input from a population I with a given spike rate distribution, as long as we use a Hebbian weight adaptation rule. For the shape of the distribution, it also does not matter whether we route the output of J back to I, or whether we use local or no recurrence. To show the effect of the adaptation rule, we also used homeostatic synaptic plasticity to adjust the weights. This means that the weight is adjusted inversely to the spike rates of input and output neurons:

w_ij = w_ij − λ w_ij (r_i^I r_j^J − µ)


Figure 12. Hebbian learning results in lognormal weight distribution independent of gain distribution.

Given is a lognormal input rate (σRI* = 2, µRI* = 4.95). A: Initial weight configurations: Gaussian (grey) or uniform (blue). B: After Hebbian learning using a Gaussian gain distribution (grey, blue as before). C: After Hebbian learning using a uniform gain distribution (grey, blue as before). D: After Hebbian learning using a lognormal gain distribution (σG* = 1.37, µG* = 32.7) (grey, blue as before).

In this case, it is very clear that with any input or initial configuration and any gain distribution, only a normal distribution of weights results (Figure 13). Again, a lognormal input spike rate slows the process of adaptation, but the end result is the same: a normal distribution.


Figure 13. Homeostatic learning results in Gaussian weight distributions, independent of gain distribution.

Given is a lognormal input rate (σRI* = 2, µRI* = 4.95). A: Initial configuration: Gaussian (grey) or uniform (blue). B: Homeostatic weight learning using a Gaussian gain distribution (grey, blue as before). C: Homeostatic weight learning using a uniform gain distribution (grey, blue as before). D: Homeostatic weight learning using a lognormal gain distribution (σG* = 1.37, µG* = 32.7) (grey, blue as before).

Since gain distributions are also lognormal, we may ask in the same way how they develop and are maintained by plasticity rules. We adapt the linear gain G by either Hebbian or homeostatic learning. Each gain can be adjusted by a Hebbian rule

g_j = g_j + λ g_j (r_j^J − µ)

or a homeostatic rule

g_j = g_j − λ g_j (r_j^J − µ)

with parameters λ and µ.
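
Since the two gain rules differ only in sign, a sketch can switch between them with a single flag (illustrative values, not the original GNN code):

    % Gain adaptation: s = +1 selects the Hebbian rule, s = -1 the homeostatic one.
    lambda = 1e-4;  mu = mean(RJ);       % illustrative target rate
    s = +1;
    G = G + s * lambda * G .* (RJ - mu);
    % Interleaving mostly Hebbian steps with occasional homeostatic steps
    % (e.g. ~80%/20%, cf. Figure 15) bounds the rates while preserving
    % the lognormal shape.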

We start with uniform or normally distributed G in an environment where W is lognormal, normal or uniform, and RI is normal or lognormal. If we adapt only G for any initial configuration, using any distribution for RI, including the lognormal distribution, and a lognormal or normal weight distribution, we arrive at a normal distribution for G with homeostatic learning and a lognormal distribution with Hebbian learning (Figure 14).


Figure 14. Hebbian or homeostatic gain learning determines a lognormal or Gaussian outcome.

Given is a lognormal input rate (σRI* = 2, µRI* = 4.95). A: Initial configuration: Gaussian (grey) or uniform (blue). B and C: Hebbian learning using lognormal or Gaussian weights; the resulting gain distribution is lognormal. D and E: Homeostatic learning using lognormal or Gaussian weights; the resulting gain distribution is Gaussian.

Lognormal distributions develop from Hebbian plasticity, and homeostatic plasticity generates only normal distributions. The explanation lies in the nature of random statistical events, which generate normal distributions when the underlying mechanisms are sums of many small events, but lognormal distributions when the underlying mechanisms are multiplicative [54]. We also wanted to understand the observed widths of the distributions. We hypothesized that the differences in σ* between W, R and G result from the network structure. Accordingly, we started a simulation with initial uniform values for G and W and Hebbian update rules, using the same learning rate λ for both (Figure 15). We find that gain, rate and weight distributions match the experimental values, and that this is true for any tested constellation. We also found that Hebbian learning alone quickly escalates values, which grow exponentially, and that additional rounds of homeostatic adaptation are required to stabilize the system. Homeostatic learning pushes the system back towards a normal distribution.


Figure 15. Experimental and generated distribution widths for spike rates, gains and weights.

Grey: experimental measurements (see Table 1–Table 3); red: generated with 100% Hebbian learning; blue: 80% Hebbian and 20% homeostatic learning combined. The basic distinction in distribution width between gains, rates and weights is reproduced with Hebbian learning alone; additional homeostatic learning matches the experimental values best.

Our simulations, set up in the most general way and allowing for various conditions and architectures, show that Hebbian learning is required both for intrinsic gains and for weights in order to generate the attested lognormal distributions. This is an interesting result, because it shows that we need prominent Hebbian intrinsic learning to explain the gain distributions that we find experimentally. Intrinsic learning is not just homeostatic adaptation; it follows the same rules as synaptic weight learning.

4 Discussion

4.1 Universality of lognormal distributions

Spike rates of neurons seem to be universally distributed according to a lognormal distribution, with many neurons at low spike rates and a small number at successively higher spike rates (heavy tail) [1]. The same distributions are found for synaptic weights [6] and intrinsic properties associated with excitability (gain) [1]. The neurons that we reported on are of very different types, and they are embedded in different kinds of connectivity. Medium spiny neurons and Purkinje cells are GABAergic (inhibitory), while cortical and IC neurons are glutamatergic (excitatory), but this is not reflected in a distinct spike rate distribution. They also fire with very different average spike rates: IC neurons operate at very high frequencies, and Purkinje neurons at much higher frequencies than cortical or striatal projection neurons. But they all have the same shape of spike rate distribution. It has been suggested [2] that lognormal spike distributions are a feature of cortical tissue and arise from strong excitatory recurrent connectivity, but this is not experimentally substantiated, nor is it theoretically necessary. While cortical pyramidal neurons exist in a heavily recurrent excitatory environment, medium spiny neurons, cerebellar Purkinje cells and IC neurons act mostly in a feed-forward way, i.e. they do not have significant recurrent excitatory (glutamatergic) connectivity.

Beyond spike rate distributions, we also gathered data on weight and gain distributions. Again the observation of lognormal distributions is ubiquitous. We find synaptic weight distributions for cortex [6] and cerebellum that are lognormal, with characteristic distribution widths. For intrinsic properties, striatal projection neurons and cortical neurons [46] show distributions of responses to constant current and of current-to-threshold measures (gain), which again appear lognormal, with smaller widths than the spike rate distributions.

Our models show that lognormal distributions arise even in a purely input-output environment, and that they are a result of Hebbian learning of weights and gains, quite independent of the overall magnitude of the spike rates.

4.2 Generating lognormal distributions

Mean spike rates, as well as intrinsic excitability and synaptic weights, have lognormal distributions.

It has often been assumed that variability in intrinsic excitability is a source of noise in neural computation [56], even though others have argued that intrinsic variability contributes to neural coding [57,58] and that intrinsic plasticity follows certain rules [19]. An excellent overview of the experimentally attested forms of intrinsic plasticity is contained in [35], cf. [59–61]. Many other detailed observations are contained in [36–38,62].

Recently, Mahon and Charpier [4] have shown that intrinsic excitability is stable in individual neurons under control conditions, while stimulation protocols (e.g. in barrel cortex of anesthetized rats) change intrinsic excitability by at least 50–100%. However, the conclusions drawn from the experimental research are often contradictory. Intrinsic plasticity is sometimes assumed to act in a negative, homeostatic way, i.e. opposite to synaptic plasticity [4], but sometimes in a 'Hebbian', positive way, i.e. cooperative with synaptic plasticity [38,63]. There is evidence for (short-term) negative or homeostatic plasticity, which has been investigated previously [4].

Our work has now shown that any kind of neural system with linear gains requires positive, Hebbian intrinsic plasticity to produce and maintain a lognormal distribution of gains. We could also show that the observed widths of the distributions, i.e. the differences in σ* between W, R and G, naturally result from the network structure and are built into the system simply by Hebbian adaptation.

Lognormal distributions may arise as stable properties of the system during early development (the set-up of the system), i.e. before actual pattern storage or event memory develops, and they are maintained during processing by Hebbian-type positive adaptation events. Homeostatic plasticity consists in downregulating gains or weights as firing rates increase. Purely homeostatic learning results in normal distributions, and erases existing lognormal distributions. By combining homeostatic and Hebbian adaptation we can achieve and maintain stable lognormal distributions.

4.3 Why logarithmic coding schemes

A lognormal distribution means that values are normally distributed on a logarithmic scale. From an engineering perspective, basic Hebbian plasticity for synapses and intrinsic properties is sufficient to generate stable logarithmic distributions. If there is random variation of multiplicative events, as in Hebbian plasticity, a lognormal distribution will be the result [54].
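
This multiplicative route to lognormality is easy to verify numerically (a self-contained Matlab sketch, not taken from the paper):

    % A product of many small random multiplicative events is lognormal:
    % the log of the product is a sum of many independent terms (CLT).
    rng(1);
    n = 1e5;  k = 50;
    factors = 1 + 0.1 * randn(n, k);  % small random multiplicative events
    factors = max(factors, eps);      % guard against non-positive factors
    v = prod(factors, 2);             % one multiplicative trajectory per row
    histogram(log(v));                % approximately Gaussian, so v is lognormal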

This is related to principles of sensory coding, where logarithmic-scale signal processing enhances the perception of weak signals while retaining the ability to respond to large signals, effectively increasing the perceptual range compared to linear coding [14]. In an interconnected network, logarithmic coding may also become a property of how representations are accessed. Feature clusters or event traces could be accessed by targeted connections to the top-level neurons, which then activate lower-level neurons in their immediate vicinity. By accessing high-frequency neurons preferentially, a whole feature area can be reached, and local diffusion will provide any additional computation. Similarly, the results of a local computation can be efficiently distributed by high-frequency neurons to other areas. Fast point-to-point communication using only high-frequency neurons may be sufficient for fast responses in many cases. Scale-free networks in general support synchronization, which is also a useful feature for rapid information transfer and access [64].

Recently, publications [65,66] have shown that there is indeed a difference between high-frequency and low-frequency neurons in their connectivity: high-frequency neurons have short delays, strong connections and directed targets, while low-frequency neurons have long delays, weak connections and diffuse targets.

The lognormal distribution of spike rates has significant implications for neural coding. Logarithmic spike rates are coupled with linear variance for responses to behavioral stimulation. In other words, the greatest part of the coding results already from the frequency rank of the neuron itself, such that high frequency neurons have the largest impact. A fixed mean rate for each neuron allows stable expectation values for network computations.

Logarithmic, hierarchical coding does not need to be sparse. The low frequency neurons may matter the most in terms of input response. With lognormal synaptic weight distributions, if strong synapses are kept stable, they may transmit an input neuron’s mean firing rate to targets and in this way provide stability to the system. All other synapses could be arbitrary. This would allow for continued pattern learning to be implemented by the bulk of low weight synapses, while the framework of neuronal interactions, e.g., the ensemble structure, could be unchanged. Such a division of labor between strong synapses and weaker ones could have many advantages in a complex, modular network.

Experimental data have often shown that sampling neuronal responses from a large population (10^5 or more neurons, of which 30% or more become activated) yields stimulus accuracy already for small samples (100–200 neurons, or 1–2%) (e.g., [67]). We suggest that this happens when sampling from a highly modular structure, and we have been able to replicate the effect with lognormal networks [68].

5 Conclusions

In our earlier work [1], we found that intrinsic excitability, as manifested by the spike response to current injection and rheobase in vitro for dorsal striatal and nucleus accumbens neurons, seems to have the same distribution as the firing rate in cortex under in vivo conditions. At approximately the same time, [6] observed a heavy-tailed distribution of synaptic weights in cortical tissue.

In this article, we have done three things: (a) collected data to show that rate, weight and gain distributions in different brain areas all follow a heavy-tailed, specifically a lognormal, distribution; (b) created a generic neural network model to show that these distributions arise from Hebbian learning, and specifically that intrinsic plasticity must be Hebbian as well; and (c) shown that the widths of the distributions, as experimentally attested, arise naturally from the network structure and the role of its components, in a very robust way. We have also discussed what the lognormal distribution means for neural coding: a division of labor between fast transmission by high-frequency neurons and low-level computation by low-frequency neurons in a modular structure, and possibly a division of labor between stable components (strong synapses, high-frequency neurons) and more variable components (weak synapses, low-frequency neurons).

6 Software and data availability

The GNN simulation software was programmed in Matlab, and is available in GitHub: https://github.com/gscheler/GNN/tree/v0.1

Archived source code at time of publication: https://doi.org/10.5281/zenodo.829949

OSS approved license: Apache 2.0.

All the data required for re-analysis of the study have been referenced throughout the manuscript.
