Abstract
Nerve cells in the brain generate sequences of action potentials with complex statistics. Theoretical attempts to understand these statistics have largely been limited to the case of temporally uncorrelated input (Poissonian shot noise) from the neurons in the surrounding network. However, the stimulation from thousands of other neurons has various sorts of temporal structure. Firstly, input spike trains are temporally correlated because their firing rates can carry complex signals and because of cell-intrinsic properties like neural refractoriness, bursting, or adaptation. Secondly, at the connections between neurons, the synapses, usage-dependent changes in the synaptic weight (short-term plasticity) further shape the correlation structure of the effective input to the cell. On the theoretical side, it is poorly understood how such correlated stimuli, so-called colored noise, affect the spike train statistics. In particular, no standard method exists to solve the associated first-passage-time problem for the interspike-interval statistics with arbitrarily colored noise. Assuming that input fluctuations are weaker than the mean neuronal drive, we derive simple formulas for the essential interspike-interval statistics for a canonical model of a tonically firing neuron subjected to arbitrarily correlated input from the network. We verify our theory by numerical simulations for three paradigmatic situations that lead to input correlations: (i) rate-coded naturalistic stimuli in presynaptic spike trains; (ii) presynaptic refractoriness or bursting; (iii) synaptic short-term plasticity. In all cases, we find pronounced effects on the interval statistics. Our results provide a framework for the interpretation of firing statistics measured in vivo in the brain.
References
Alijani, A., & Richardson, M. (2011). Rate response of neurons subject to fast or frozen noise: from stochastic and homogeneous to deterministic and heterogeneous populations. Physical Review E, 84, 011919.
Baddeley, R., Abbott, L.F., Booth, M.C.A., Sengpiel, F., Freeman, T., Wakeman, E.A., & Rolls, E.T. (1997). Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proceedings of the Royal Society of London B, 264, 1775.
Badel, L., Lefort, S., Brette, R., Petersen, C.C., Gerstner, W., & Richardson, M.J. (2008). Dynamic I-V curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. Journal of Neurophysiology, 99(2), 656.
Bair, W., Koch, C., Newsome, W., & Britten, K. (1994). Power spectrum analysis of bursting cells in area MT in the behaving monkey. Journal of Neuroscience, 14, 2870.
Bauermeister, C., Schwalger, T., Russell, D., Neiman, A., & Lindner, B. (2013). Characteristic effects of stochastic oscillatory forcing on neural firing statistics: theory and application to paddlefish electroreceptor afferents. PLoS Computational Biology, 9(8), e1003170.
Brenner, N., Agam, O., Bialek, W., & de Ruyter van Steveninck, R. (2002). Statistical properties of spike trains: universal and stimulus-dependent aspects. Physical Review E, 66, 031907.
Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183.
Brunel, N., & Sergi, S. (1998). Firing frequency of leaky integrate-and-fire neurons with synaptic currents dynamics. Journal of Theoretical Biology, 195, 87.
Brunel, N., Chance, F.S., Fourcaud, N., & Abbott, L.F. (2001). Effects of synaptic noise and filtering on the frequency response of spiking neurons. Physical Review Letters, 86, 2186.
Bulsara, A., Lowen, S.B., & Rees, C.D. (1994). Cooperative behavior in the periodically modulated Wiener process: noise-induced complexity in a model neuron. Physical Review E, 49, 4989.
Burkitt, A.N. (2006). A review of the integrate-and-fire neuron model: I. homogeneous synaptic input. Biological Cybernetics, 95, 1–19.
Buzsáki, G., & Draguhn, A. (2004). Neural oscillations in cortical networks. Science, 304, 1926.
Câteau, H., & Reyes, A.D. (2006). Relation between single neuron and population spiking statistics and effects on network activity. Physical Review Letters, 96, 058101.
Chacron, M.J., Longtin, A., St-Hilaire, M., & Maler, L. (2000). Suprathreshold stochastic firing dynamics with memory in P-type electroreceptors. Physical Review Letters, 85, 1576.
Chacron, M.J., Longtin, A., & Maler, L. (2005). Delayed excitatory and inhibitory feedback shape neural information transmission. Physical Review E, 72, 051917.
Compte, A., Constantinidis, C., Tegner, J., Raghavachari, S., Chafee, M.V., Goldman-Rakic, P.S., & Wang, X.J. (2003). Temporally irregular mnemonic persistent activity in prefrontal neurons of monkeys during a delayed response task. Journal of Neurophysiology, 90(5), 3441.
Cox, D.R., & Lewis, P.A.W. (1966a). The statistical analysis of series of events. London: Chapman and Hall.
Cox, D.R., & Lewis, P.A.W. (1966b). The statistical analysis of series of events. London: Chapman and Hall, chap. 4.6.
Cox, D.R., & Miller, H.D. (1965). The theory of stochastic processes. London: Chapman and Hall.
Destexhe, A., Rudolph, M., & Paré, D. (2003). The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience, 4(9), 739–751.
Dittman, J.S., Kreitzer, A.C., & Regehr, W.G. (2000). Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. Journal of Neuroscience, 20, 1374.
Droste, F., & Lindner, B. (2014). Integrate-and-fire neurons driven by asymmetric dichotomous noise. Biological Cybernetics, 108(6), 825–843.
Droste, F., Schwalger, T., & Lindner, B. (2013). Interplay of two signals in a neuron with heterogeneous synaptic short-term plasticity. Frontiers in Computational Neuroscience, 7, 86.
Dummer, B., Wieland, S., & Lindner, B. (2014). Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity. Frontiers in Computational Neuroscience, 8, 104.
Ermentrout, G.B., & Terman, D.H. (2010). Mathematical foundations of neuroscience. New York: Springer.
Fisch, K., Schwalger, T., Lindner, B., Herz, A., & Benda, J. (2012). Channel noise from both slow adaptation currents and fast currents is required to explain spike-response variability in a sensory neuron. Journal of Neuroscience, 32(48), 17332–17344.
Fortune, E.S., & Rose, G.J. (2001). Short-term synaptic plasticity as a temporal filter. Trends in Neurosciences, 24, 381.
Fourcaud, N., & Brunel, N. (2002). Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Computation, 14, 2057.
Fourcaud-Trocmé, N., Hansel, D., van Vreeswijk, C., & Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. Journal of Neuroscience, 23(37), 11628–11640.
Franklin, J., & Bair, W. (1995). The effect of a refractory period on the power spectrum of neuronal discharge. SIAM Journal on Applied Mathematics, 55, 1074.
Gerstein, G.L., & Mandelbrot, B. (1964). Random walk models for the spike activity of a single neuron. Biophysical Journal, 4, 41.
Gerstner, W., & Kistler, W.M. (2002). Spiking neuron models: single neurons, populations, plasticity. Cambridge: Cambridge University Press.
Hänggi, P., & Jung, P. (1995). Colored noise in dynamical systems. Advances in Chemical Physics, 89, 239.
Holden, A.V. (1976). Models of the stochastic activity of neurones. Berlin: Springer.
Kamke, E. (1965). Differentialgleichungen, Lösungsmethoden und Lösungen II: partielle Differentialgleichungen erster Ordnung für eine gesuchte Funktion. Leipzig: Geest & Portig.
van Kampen, N.G. (1992). Stochastic processes in physics and chemistry. Amsterdam: North-Holland.
Koch, C. (1999). Biophysics of computation: information processing in single neurons. New York: Oxford University Press.
Lerchner, A., Ursta, C., Hertz, J., Ahmadi, M., Ruffiot, P., & Enemark, S. (2006). Response variability in balanced cortical networks. Neural Computation, 18(3), 634.
Lindner, B. (2004). Interspike interval statistics of neurons driven by colored noise. Physical Review E, 69, 022901.
Lindner, B. (2006). Superposition of many independent spike trains is generally not a Poisson process. Physical Review E, 73, 022,901.
Lindner, B., Gangloff, D., Longtin, A., & Lewis, J.E. (2009). Broadband coding with dynamic synapses. Journal of Neuroscience, 29(7), 2076–2088.
Liu, Y.H., & Wang, X.J. (2001). Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. Journal of Computational Neuroscience, 10, 25.
London, M., Roth, A., Beeren, L., Häusser, M., & Latham, P.E. (2010). Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature, 466(7302), 123.
Lowen, S.B., & Teich, M.C. (1992). Auditory-nerve action potentials form a nonrenewal point process over short as well as long time scales. Journal of the Acoustical Society of America, 92, 803.
Merkel, M., & Lindner, B. (2010). Synaptic filtering of rate-coded information. Physical Review E, 81(4), 041921.
Middleton, J.W., Chacron, M.J., Lindner, B., & Longtin, A. (2003). Firing statistics of a neuron model driven by long-range correlated noise. Physical Review E, 68, 021920.
Moreno-Bote, R., & Parga, N. (2004). Role of synaptic filtering on the firing response of simple model neurons. Physical Review Letters, 92(2), 028102.
Moreno-Bote, R., & Parga, N. (2006). Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Physical Review Letters, 96(2), 028101. doi:10.1103/PhysRevLett.96.028101.
Moreno-Bote, R., Beck, J., Kanitscheider, I., Pitkow, X., Latham, P., & Pouget, A. (2014). Information-limiting correlations. Nature Neuroscience, 17(10), 1410.
Nawrot, M.P., Boucsein, C., Rodriguez-Molina, V., Aertsen, A., Grün, S., & Rotter, S. (2007). Serial interval statistics of spontaneous activity in cortical neurons in vivo and in vitro. Neurocomputing, 70, 1717.
Peterson, A.J., Irvine, D.R.F., & Heil, P. (2014). A model of synaptic vesicle-pool depletion and replenishment can account for the interspike interval distributions and nonrenewal properties of spontaneous spike trains of auditory-nerve fibers. Journal of Neuroscience, 34(45), 15097.
Pozzorini, C., Naud, R., Mensi, S., & Gerstner, W. (2013). Temporal whitening by power-law adaptation in neocortical neurons. Nature Neuroscience, 16(7), 942–948.
Richardson, M.J.E. (2008). Spike-train spectra and network response functions for non-linear integrate-and-fire neurons. Biological Cybernetics, 99(4–5), 381.
Risken, H. (1984). The Fokker-Planck equation. Berlin: Springer.
Rosenbaum, R., Rubin, J., & Doiron, B. (2012). Short term synaptic depression imposes a frequency dependent filter on synaptic information transfer. PLoS Computational Biology, 8(6).
Salinas, E., & Sejnowski, T.J. (2002). Integrate-and-fire neurons driven by correlated stochastic input. Neural Computation, 14, 2111.
Schwalger, T., & Lindner, B. (2013). Patterns of interval correlations in neural oscillators with adaptation. Frontiers in Computational Neuroscience, 7, 164.
Schwalger, T., & Schimansky-Geier, L. (2008). Interspike interval statistics of a leaky integrate-and-fire neuron driven by Gaussian noise with large correlation times. Physical Review E, 77, 031914.
Schwalger, T., Fisch, K., Benda, J., & Lindner, B. (2010). How noisy adaptation of neurons shapes interspike interval histograms and correlations. PLoS Computational Biology, 6(12), e1001026. doi:10.1371/journal.pcbi.1001026.
Schwalger, T., Miklody, D., & Lindner, B. (2013). When the leak is weak – how the first-passage statistics of a biased random walk can approximate the ISI statistics of an adapting neuron. European Physical Journal Special Topics, 222(10), 2655.
Sobie, C., Babul, A., & de Sousa, R. (2011). Neuron dynamics in the presence of 1/f noise. Physical Review E, 83(5), 051912.
Stratonovich, R.L. (1967). Topics in the theory of random noise, vol 1. New York: Gordon and Breach.
Wang, X.J. (1998). Calcium coding and adaptive temporal computation in cortical pyramidal neurons. Journal of Neurophysiology, 79(3), 1549–1566.
Wang, X.J., Liu, Y., Sanchez-Vives, M.V., & McCormick, D.A. (2003). Adaptation and temporal decorrelation by single neurons in the primary visual cortex. Journal of Neurophysiology, 89(6), 3279–3293.
Acknowledgments
Research was supported by the European Research Council (Grant Agreement no. 268689, MultiRules), the BMBF (FKZ: 01GQ1001A), and the research training group GRK1589/1.
Conflict of interest
The authors declare that they have no conflict of interest.
Action Editor: Brent Doiron
Appendix A
A.1 Gaussian approximation of the synaptic input current
We would like to approximate the synaptic input current
by a Gaussian process with the same mean and correlation function as \(I_{\text {syn}}(t)\). To this end, let us rewrite Eq. (21) in terms of the delta-spike trains \(x_{k}(t)={\sum }_{t_{i}^{(k)}<t}\delta \left (t-t_{i}^{(k)}\right )\), k=1,…,N, where \(t_{i}^{(k)}\) denotes the i-th spike time at the k-th synapse:
In the case of stationary input, mean and correlation function are given by
and
Here, ν_k = ⟨x_k(t)⟩ is the stationary firing rate of the k-th input spike train and C_{kl}(τ) is the correlation function between spike trains, defined by
It is convenient to formulate Eq. (24) in the Fourier domain. Defining the power spectrum as the Fourier transform of the correlation function
we find
Here and in the following, \(\tilde {~}\) and ∗ denote Fourier transform and complex conjugation, respectively. Using these approximations, we can write the current balance Eq. (1) as
where
and ζ(t) is a zero-mean Gaussian process with power spectrum
If S_ζ(f) is not constant, ζ(t) will be referred to as colored noise (in contrast to white noise, which has a flat power spectrum).
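To illustrate the simplest limiting case of this approximation, a short sketch (with purely illustrative parameters, not taken from the main text): the pooled input of N independent Poisson spike trains of rate ν and weight w is replaced by a Gaussian white noise with matched mean Nwν and intensity Nw²ν.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not from the main text): N independent Poisson
# inputs with rate nu, synaptic weight w, time step dt, total time T.
N, nu, w, dt, T = 200, 5.0, 0.1, 1e-3, 200.0
nbins = int(T / dt)

# Shot-noise input: presynaptic spike counts per bin, scaled to a current.
counts = rng.poisson(N * nu * dt, size=nbins)
I_shot = w * counts / dt

# Gaussian surrogate with matched mean and white-noise intensity:
# mean = N*w*nu, intensity D = N*w**2*nu (uncorrelated Poisson inputs).
mean_th = N * w * nu
D_th = N * w**2 * nu
I_gauss = mean_th + np.sqrt(D_th / dt) * rng.standard_normal(nbins)

print(I_shot.mean(), I_gauss.mean())  # both ~ mean_th
print(I_shot.var(), I_gauss.var())    # both ~ D_th / dt
```

For temporally correlated presynaptic spike trains, the matched Gaussian surrogate is colored rather than white, with the power spectrum S_ζ(f) defined above.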
A.2 Derivation of the interspike interval statistics
Our approach assumes that the colored noise can be represented as ζ(t)=b⋅Y(t), where b is a constant d-dimensional vector and Y(t) is a d-dimensional Ornstein-Uhlenbeck process (OUP). Specifically, the neuron model can be written as
with the additional fire-and-reset rule that V is reset to V=0 whenever V reaches the threshold V_T. Here, A and B are the drift and diffusion matrices, respectively. The eigenvalues of A must have negative real parts, and ξ(t) is a vector of Gaussian white noise processes that obey \(\langle \xi _{i}(t)\xi _{j}(t^{\prime })\rangle =\delta _{i,j}\delta (t-t^{\prime })\). Furthermore, we assume that the elements of b have the same physical dimensions as μ and that Y is dimensionless.
The variance of ζ(t) is given by σ² = b^T Σ b, where Σ is the covariance matrix of the OUP with elements
The covariance matrix is related to the drift matrix A and the diffusion matrix
by the Lyapunov equation (see e.g. Risken 1984; van Kampen 1992)
This system of linear equations can be used to solve for the elements of the covariance matrix Σ_{ij}.
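Numerically, Σ can be obtained with a standard Lyapunov solver. The sketch below assumes the common convention AΣ + ΣA^T + BB^T = 0 for the stationary covariance of dY = AY dt + Bξ(t) dt (the placement of the transposes depends on how the noise term is written); the matrices are arbitrary examples, not values from the main text.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary 2-d example; A is stable (eigenvalues -1 and -2).
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.array([[1.0, 0.0],
              [0.3, 0.8]])

# Stationary covariance from the Lyapunov equation A Sigma + Sigma A^T + B B^T = 0
Sigma = solve_continuous_lyapunov(A, -B @ B.T)

residual = A @ Sigma + Sigma @ A.T + B @ B.T
print(np.max(np.abs(residual)))  # ~ 0 up to round-off
```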
A.2.1 Non-dimensional model
It is useful to non-dimensionalize the model Eq. 31a by measuring time in units of the mean ISI, V_T/μ, and voltage in units of V_T. This corresponds to the introduction of dimensionless variables
and dimensionless parameters
This yields the non-dimensionalized Langevin equation
The non-dimensional firing threshold is \(\hat {V}_{\mathrm {T}}=1\). To make the small parameter in our perturbation calculation explicit, we have introduced the dimensionless parameter 𝜖=σ/μ≪1. From now on, we consider the non-dimensional model Eq. 37a and, for notational simplicity, omit the hats on the non-dimensional quantities.
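As a sanity check of the non-dimensional model, here is a minimal Euler-Maruyama simulation of a one-dimensional special case, dV/dt = 1 + 𝜖η(t), with a unit-variance Ornstein-Uhlenbeck noise η; all parameter values are illustrative, not taken from the main text.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional special case of the non-dimensional model:
# dV/dt = 1 + eps*eta(t), threshold 1, reset to 0; eta is a unit-variance
# OU process with correlation time tau (illustrative values).
eps, tau, dt, T = 0.1, 2.0, 1e-3, 2000.0
n = int(T / dt)
xi = rng.standard_normal(n)

v, eta, t_last = 0.0, 0.0, 0.0
isis = []
for i in range(n):
    v += (1.0 + eps * eta) * dt
    eta += -eta / tau * dt + np.sqrt(2.0 * dt / tau) * xi[i]
    if v >= 1.0:
        t = (i + 1) * dt
        isis.append(t - t_last)
        t_last = t
        v = 0.0  # fire-and-reset rule

isis = np.array(isis)
print(isis.mean())               # ~ 1: time is measured in units of the mean ISI
print(isis.std() / isis.mean())  # coefficient of variation, small for eps << 1
```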
The stationary auto-correlation function of the noise η(t)=b⋅Y(t) is for τ≥0 given by (see e.g. (Risken 1984))
where e τA is the matrix exponential defined by its series representation \(e^{{\tau }\mathrm {A}}={\sum }_{k=0}^{\infty }\frac {1}{k!}(\tau \mathrm {A})^{k}\). Note that by our rescaling of b, Eq. (36), the variance of η(t) is C in(0)=〈η 2〉=1. Furthermore, taking the Fourier transform of the correlation function yields the input power spectrum
where \(\mathbb {I}_{d}\) is the d×d identity matrix.
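With the correlation function C_in(τ) = b^T e^{|τ|A} Σ b, one convenient equivalent form of the input power spectrum is S_in(f) = 2 Re[b^T (2πi f I_d − A)^{−1} Σ b]. The sketch below (arbitrary example matrices, not the paper's) checks that this spectrum integrates to the unit variance C_in(0) = 1.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary illustrative 2-d OUP; eta(t) = b . Y(t) rescaled to unit variance.
A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])   # complex eigenvalues -> a spectral peak
B = np.diag([1.0, 0.6])
Sigma = solve_continuous_lyapunov(A, -B @ B.T)

b = np.array([1.0, 0.5])
b = b / np.sqrt(b @ Sigma @ b)  # enforce C_in(0) = b^T Sigma b = 1

def S_in(f):
    # S_in(f) = 2 Re[ b^T (2*pi*i*f*I - A)^{-1} Sigma b ], the Fourier
    # transform of C_in(tau) = b^T e^{|tau| A} Sigma b
    M = np.linalg.inv(2j * np.pi * f * np.eye(2) - A)
    return 2.0 * np.real(b @ M @ Sigma @ b)

f = np.linspace(-40.0, 40.0, 8001)
S_vals = np.array([S_in(fi) for fi in f])
power = np.sum(S_vals) * (f[1] - f[0])
print(power)  # ~ C_in(0) = 1 (small deficit from truncating the tails)
```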
A.2.2 Fokker-Planck equation
The stochastic dynamics 37a with fire-and-reset rule can be reformulated in terms of probability currents and densities. The probability current that describes the drift of V(t) is given by
where p(v,y,t)=〈δ(v−V(t))δ(y−Y(t))〉 is the probability density of V(t) and Y(t) at time t. The d-dimensional OUP corresponds to d probability currents
Here, \(i=1,\dotsc ,d\) and the non-dimensionalized diffusion matrix is defined by \(\mathrm {D}=\frac {1}{2}\hat {\mathrm {B}}^{\mathrm {T}}\hat {\mathrm {B}}\). Because trajectories cannot leave the system, the probability is conserved, which can be expressed by the continuity equation
The right-hand side represents the reinjection and outflux of probability at the reset v=0 (source) and the threshold v=1 (sink), respectively, corresponding to the reset of trajectories. With the definition of the currents, Eqs. (40) and (41), the last equation can be written as a partial differential equation for p(v,y,t). Because the source and sink terms are sharply localized, we can omit these terms in the regions v<0, 0<v<1 and v>1, and we arrive at the Fokker-Planck equation (FPE), Eq. (3), with non-dimensionalized parameters μ=1 and σ=𝜖. The source and sink terms can be realized by imposing appropriate boundary conditions at v=0 and v=1. The sink term at v=1 ensures that no probability can be found beyond the threshold and thus that there is no backflux of probability from above the threshold, i.e. J_v(1,y,t)=0 for 1+𝜖b⋅y<0. With Eq. (40) this implies the condition (cf. Brunel and Sergi 1998)
Furthermore, integration of Eq. (42) from v=−ε to v=ε (and letting \(\varepsilon \rightarrow 0\)) reveals that J_v(v,y,t) suffers a jump at v=0 by an amount J_v(1,y,t). With Eq. (40) this implies the condition
where v=0− and v=0+ denote the left- and right-sided limits. Finally, probability should not leak across infinite boundaries and the probability density must be normalized:
In addition to the boundary conditions, the FPE must be supplied with an initial condition. Because we aim at the statistics of the interspike intervals, it is advantageous to consider an ensemble of neurons that have just fired a spike at time t=0. This corresponds to preparing the system in a state, in which the membrane potential has been reset to zero, i.e. V(0)=0, and the OUP is distributed according to the distribution of Y sampled at the moments of firing. If we denote this distribution by p spike(y), the initial condition for the FPE must be of the form p(v,y,0)=δ(v)p spike(y).
A.2.3 Distribution upon firing
In order to determine p_spike(y), we first seek the stationary solution p_s(v,y) of the FPE. Setting ∂p/∂t=0, we obtain the stationary FPE
which is still difficult to solve analytically due to the discontinuous boundary condition (43) at v=1. However, for weak noise, 𝜖≪1, the velocity 1+𝜖b⋅y is practically always positive, so that the boundary condition (43) can be safely ignored. A positive velocity also precludes trajectories from entering the region v<0. As a consequence, the system is restricted to the region 0≤v≤1 and obeys the periodic boundary condition
A solution of Eq. (47) is given by the stationary distribution of the OUP, i.e.
which clearly fulfills the boundary conditions 48a.
The distribution upon firing, p_spike(y), is proportional to the stationary probability current \(J_{v}^{(s)}(1,\mathbf {y})=(1+\epsilon \mathbf {b}\cdot \mathbf {y})p_{s}(1,\mathbf {y})\) across the threshold (Lindner 2004). In fact, the probability per unit time that a trajectory crosses the threshold through the surface element dS(y) at the point y on the hyperplane defined by v=1 is given by \(J_{v}^{(s)}(1,\mathbf {y})\mathrm {d} y_{1}\dotsb \mathrm {d} y_{d}\). Thus, the probability density upon firing must be proportional to \(J_{v}^{(s)}(1,\mathbf {y})\). The factor of proportionality can be found from the normalization condition, which yields p_spike(y)=(1+𝜖b⋅y)p_s(1,y) and hence the initial distribution
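In one dimension with a unit-variance OU noise (b=1), the result p_spike(y)=(1+𝜖y)p_s(1,y) implies that the noise sampled at spike times acquires a positive mean ≈ 𝜖 (the neuron preferentially fires when the noise pushes it upward). This can be checked with a short simulation; all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-d check of p_spike(y) = (1 + eps*y) p_s(y): the OU noise recorded at
# spike times has mean ~ eps (illustrative parameters).
eps, tau, dt, T = 0.15, 1.0, 1e-3, 4000.0
n = int(T / dt)
xi = rng.standard_normal(n)

v, eta = 0.0, 0.0
eta_at_spike = []
for i in range(n):
    v += (1.0 + eps * eta) * dt
    eta += -eta / tau * dt + np.sqrt(2.0 * dt / tau) * xi[i]
    if v >= 1.0:
        eta_at_spike.append(eta)  # sample the noise at the moment of firing
        v = 0.0

eta_at_spike = np.array(eta_at_spike)
print(eta_at_spike.mean())  # ~ eps = 0.15, the weak-noise prediction
```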
A.2.4 Interspike interval density
An interspike interval is equivalent to the first-passage time (FPT) with respect to the threshold V_T=1, i.e. the time a system that has been prepared according to a spike at t=0 needs to reach the threshold for the first time. A standard approach to obtain the probability density of the FPT (i.e. the ISI density) is to consider an ensemble in which trajectories are not reset upon threshold crossing but are taken out of the system. As a consequence, the probability to find a trajectory with v<1,
is the probability that a trajectory survives until time t. Here, p(v,y,t) is the time-dependent solution of the FPE, for which the reset condition (44) has been dropped now. The FPT density is given by P(t)=−dG/dt, or
Using the continuity Eq. (42) and the natural boundary conditions (45) yields
where
is the FPT density with respect to a general threshold V_T=v. Eq. (53) simply expresses that the FPT density is equal to the total probability current through the threshold at time t. The ISI density can be rewritten as
where \(\tilde P(t,\ell )=\int \mathrm {d}v\,e^{i\ell v}\mathcal {P}(t,v)\) is the Fourier transform with respect to v. Using Eq. (54), we thus have
This function can be further related to the characteristic function
where k is a d-dimensional vector with elements k i , \(i=1,\dotsc ,d\). Indeed, comparing (56) with (57), we find the relation
where ∇ k q denotes the gradient of q with respect to k, i.e ∇ k q is a vector with elements ∂ i q≡∂ q/∂ k i . Thus, if the characteristic function q(ℓ,k,t) is known, one can derive the ISI density via (55) and (58). In order to compute q(ℓ,k,t), it is not necessary to calculate the probability density p(v,y,t) explicitly from the FPE, which does not permit a closed form solution for the initial conditions (50). However, applying the Fourier transform to the FPE directly yields a first-order differential equation for q(ℓ,k,t):
with initial condition
This equation can be solved by the method of characteristics (see e.g. (Kamke 1965)). The characteristic equations are
The solution of the first equation is
which can be solved for the initial vector c:
The solution of Eq. (62) is given by
where the Lyapunov equation (34) has been used. The prefactor Q 0(c) can be determined from the initial condition (60) as \( Q_{0}=1+i\epsilon \mathbf {b}^{\mathrm {T}}\varSigma \mathbf {c}\). The integration in Eq. (65) has to be performed using the expression (63), followed by a re-substitution of the vector c=c(t,k) according to Eq. (64). As a result we obtain the characteristic function
where
Using Eq. (58) leads to
and
which constitutes the final result for the ISI density. Let us provide two alternative expressions for h(t) and g(t). First, if A is invertible we can write these functions in terms of the drift matrix:
Second, using the power spectrum of the noise
we find
where \(\text {sinc}(x)=\sin (\pi x)/(\pi x)\).
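When evaluating these expressions numerically, note that the normalized convention used here, sinc(x)=sin(πx)/(πx), coincides with NumPy's np.sinc (and differs from the unnormalized sin(x)/x convention found elsewhere); a quick check:

```python
import numpy as np

# np.sinc implements the normalized convention sin(pi x)/(pi x), sinc(0) = 1
x = np.linspace(-3.0, 3.0, 601)
manual = np.ones_like(x)
nz = x != 0
manual[nz] = np.sin(np.pi * x[nz]) / (np.pi * x[nz])

print(np.max(np.abs(np.sinc(x) - manual)))  # ~ 0: conventions agree
```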
A.2.5 Cumulants of the nth-order intervals
Apart from the ISI density, the ISI statistics can be characterized by the cumulants (semi-invariants). We will calculate these cumulants also for the nth-order intervals, because they will be used to find an expression for the SCC and the calculation poses no extra difficulty. In fact, as argued in the main text, the time t_n of the n-th threshold crossing, i.e. the nth-order interval, is equal to the first-passage time of the free process with respect to the n-fold threshold V_T=n (given the same noise realization). The probability density of the nth-order interval is thus given by \(\mathcal {P}(t,n)\) (cf. Eq. (54)). The k-th cumulant of the nth-order interval is given by (van Kampen 1992)
where
is the Laplace transform of the n-th-order interval density, which plays the role of a moment generating function. Introducing the Laplace-Fourier transform of the probability density
the moment generating function can be rewritten as
The function φ satisfies the transformed FPE
The mixed derivative ∂ v ∇ k φ makes an exact solution infeasible. However, using the perturbation expansion
yields a hierarchy of first-order partial differential equations for the coefficients φ (k)(v,k,s). Specifically, we obtain
where the inhomogeneities I_m are given by
If we insert the ansatz (76) into the formula (74), we obtain a perturbation series for the moment generating function:
where \(\bar {P}_{n}^{(0)}(s)=\varphi ^{(0)}(n,0,s)\) and
for m≥1. Equation (77) can be again solved by the method of characteristics. To this end, the solution surface \(\left (v,\mathbf {k},\varphi ^{(m)}(v,\mathbf {k},s)\right )\) is parametrized by a parameter τ and d initial constants c i , \(i=1,\dotsc ,d\). The total derivative of φ (m) with respect to τ is then \(\mathrm {d}\varphi ^{(m)}/\mathrm {d}\tau =\frac {\mathrm {d}v}{\mathrm {d}\tau }\partial _{v}\varphi ^{(m)}+ \frac {\mathrm {d}\mathbf {k}^{\mathrm {T}}}{\mathrm {d}\tau }\nabla _{\mathbf {k}}\varphi ^{(m)}\). Comparison with Eq. (77) yields the characteristic equations
The integration of the first two equations results in
To solve the third equation, we make the ansatz
Using this ansatz in Eq. (85) yields
where \(\hat {I}_{n}(\tau ,\mathbf {c})=I_{n}\exp \left (s\tau +\frac {1}{2}\mathbf {k}^{\mathrm {T}}\varSigma \mathbf {k}\right )\). In terms of the lower-order functions ψ n−1, the inhomogeneous parts can be written as
In order to regard these expressions as functions of τ and c, the vector k has to be understood as \(\mathbf {k}(\tau ,\mathbf {c})=e^{-\tau \text {A}^{\text {T}}}\mathbf {c}\). It turns out that in Eq. (88), the prefactor in front of ψ (m) vanishes because in light of Eqs. (34) and (84)
Thus, ψ (m) can be easily integrated:
For the evaluation of this formula, the derivatives of the lower-order functions ψ (m−1)(τ,c) with respect to v and k are needed. To this end, it is useful to apply the chain rule: for a given scalar function f(τ,c) we can write
and
In Eq. (94), we made use of ∇ k τ=0 and \(\nabla _{\mathbf {k}}\mathbf {c}=\left (e^{\tau \text {A}^{\text {T}}}\right )^{\mathrm {T}}=e^{\tau A}\), where ∇ k c is a matrix with elements (∇ k c) i j =∂ c j /∂ k i . In Eq. (95), the relation \(\partial _{v}\mathbf {c}=\partial _{v}e^{v\text {A}^{\text {T}}}\mathbf {k}={A}^{\mathrm {T}}\mathbf {c}\) was employed.
For m≥1, the perturbation coefficient \(\bar {P}_{n}^{(m)}(s)\) can be expressed in terms of ψ (m−1)(τ,c) by using Eqs. (82), (93) and (95):
For m=0, we obtain from Eq. (93)
Equation (87) then yields the leading-order of the moment-generating function
The derivatives of ψ (0) simply read
Because the gradient in Eq. (96) equals zero for m=1 (Eq. (99)), the first-order contribution to \(\bar {P}_{n}(s)\) vanishes:
From Eqs. (90) and (93) we obtain
Using Eq. (94), the gradient of ψ (1) reads
Furthermore, the derivative with respect to v is
and the mixed derivative takes the form
For m=2, we find the first non-vanishing correction of the moment-generating function due to noise:
Here, C in(t) is the noise correlation function as defined in Eq. (38). Furthermore, we obtain
and the gradient
The derivative with respect to v is more involved: using Eq. (95) first leads to
Using the identity \(e^{-\tau \text {A}^{\text {T}}}{A}^{\mathrm {T}}=-\frac {d}{d\tau }e^{\tau \text {A}^{\text {T}}}\) and integrating by parts simplifies the expression to
Taking the derivative of the last equation with respect to k results in
where \(\mathbb {I}_{d}\) denotes the d×d identity matrix.
Because the gradient ∇ k ψ (2) vanishes for c=0, we find
Using the derivatives in Eqs. (106), (108) and (109) we obtain the third-order perturbation solution
Differentiating this expression with respect to k, setting c=0, and taking the scalar product with the vector b leads to
Here, \(C_{\text {in}}^{\prime }(\tau )\) denotes the derivative of the correlation function with respect to τ.
Finally, using Eqs. (96) and (112) yields the second non-vanishing correction:
To summarize, the moment-generating function reads up to fourth order in 𝜖:
This formula satisfies the general property \(\bar {P}_{n}(0)=1\) of a moment-generating function, which expresses the normalization \({\int }_{0}^{\infty }\mathrm {d}t\,P_{n}(t)=1\) of the nth-order interval density.
Knowing the moment-generating function, Eq. (114), we can compute the cumulants of the nth-order intervals by means of Eq. (71) up to fourth-order in 𝜖. As expected, the first cumulant, or mean nth-order interval, is given by
which is consistent with the fact that \(\langle t_{n}\rangle =\langle T_{1}\rangle +\dotsb +\langle T_{n}\rangle =n\langle T_{i}\rangle =n\) (recall that 〈T i 〉=1 in the non-dimensionalized system).
The second cumulant is equal to the variance \(\langle {\Delta } {t_{n}^{2}}\rangle \) of the nth-order intervals, where Δt_n = t_n − ⟨t_n⟩. From the formula for the cumulants, Eq. (71), it can be seen that only the coefficients of the quadratic terms s² in Eqs. (113) and (114) enter the expression for the second cumulant (\(\bar {P}_{n}(s)\) has no linear terms). Thus, we find
The third cumulant only involves the cubic terms in Eq. (113), hence
A.2.6 Coefficient of variation and serial correlation coefficient
The coefficient of variation is given by the ratio of standard deviation and mean of the 1st-order interval (i.e. ISI). If we take into account that we measure time in units of the mean ISI, we find
where κ 2,1 is given by Eq. (116) with n=1. Furthermore, we can compute the SCC by using formula (116) for arbitrary n:
If we further restrict ourselves to the leading order, we can derive a much simplified expression for the SCC in terms of the noise correlation function. In fact, writing \(\langle {\Delta } {t_{n}^{2}}\rangle =2\epsilon ^{2}{{\int }_{0}^{n}}\mathrm {d}\tau _{2}\,{\int }_{0}^{\tau _{2}}\mathrm {d}\tau _{1}\,C_{\text {in}}{(\tau _{1})}+\mathcal {O}(\epsilon ^{4})\) and accounting for the symmetry of C in(τ), we obtain from Eq. (119)
In terms of the noise power spectrum, Eq. (70), this expression can be rewritten as
To elucidate the general structure of serial correlations, it is instructive to express the SCC in terms of the drift matrix A and covariance matrix Σ of the OUP. In leading order of 𝜖, the variance of the nth-order intervals reads
where we assume that the drift matrix A is invertible. Setting n=1, we obtain the squared coefficient of variation:
Using Eq. (119), the leading order of the SCC can now be written as
Apparently, the SCC has the simple exponential structure \(\rho _{n}=\mathbf {c}_{1}^{\mathrm {T}} e^{(n-1)\text {A}^{\text {T}}}\mathbf {c}_{2}\) with constant vectors c_1 and c_2. That means that the SCC is a linear combination of the entries of the matrix exponential \(e^{(n-1)\text {A}^{\text {T}}}\). In fact, let Q be the constant matrix made up of the (generalized) eigenvectors of A^T. Then, A^T can be diagonalized (or, more generally, transformed into its Jordan normal form) through the representation A^T = QJQ^{−1}. Thus, the SCC can be rewritten as
where the vectors d_1 = Q^T c_1 and \(\mathbf {d}_{2}=\mathrm {Q}^{-1}\mathbf {c}_{2}\) are constant. In the special case where A^T is diagonalizable (i.e. A^T possesses d linearly independent eigenvectors corresponding to the eigenvalues λ_i, \(i=1,\dotsc ,d\)), the matrix exponential e^{(n−1)J} is diagonal with diagonal elements \(e^{\lambda _{i}(n-1)}\). Thus, the SCC is simply a superposition of the exponentials \(e^{\lambda _{i}(n-1)}\). Real eigenvalues contribute exponential decays with decay constant 1/|λ_i|. Complex eigenvalues, in contrast, lead to damped oscillations, which might nevertheless appear rather irregular because \(e^{\lambda _{i}(n-1)}\) is sampled at discrete integer values (Bauermeister et al. 2013). In any case, the SCC decays to zero for \(n\rightarrow \infty \) because, by assumption, the real parts of the eigenvalues of the drift matrix are negative.
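The equivalence between the matrix-exponential form of the SCC and the eigenvalue superposition can be checked directly; the drift matrix and the vectors c_1, c_2 below are arbitrary examples (not the paper's values), with A^T = QJQ^{−1} and weights d_1 = Q^T c_1, d_2 = Q^{−1} c_2.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary stable drift matrix with complex eigenvalues -0.2 +/- i
# (damped-oscillatory SCC) and arbitrary constant vectors c1, c2.
A = np.array([[-0.2, 1.0],
              [-1.0, -0.2]])
c1 = np.array([0.8, -0.3])
c2 = np.array([0.5, 0.4])

# SCC structure: rho_n = c1^T e^{(n-1) A^T} c2
rho = np.array([c1 @ expm((n - 1) * A.T) @ c2 for n in range(1, 21)])

# Same thing as a superposition of e^{lambda_i (n-1)}:
# rho_n = sum_i d1_i * exp(lambda_i (n-1)) * d2_i with d1 = Q^T c1, d2 = Q^{-1} c2
lam, Q = np.linalg.eig(A.T)
d1 = Q.T @ c1
d2 = np.linalg.solve(Q, c2)
rho_eig = np.array([np.real(np.sum(d1 * np.exp(lam * (n - 1)) * d2))
                    for n in range(1, 21)])

print(np.max(np.abs(rho - rho_eig)))  # ~ 0: the two forms agree
```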
A.2.7 Cumulative correlations and Fano factor
The exponential structure of the SCC, Eq. (123), also allows us to calculate the cumulative effect of serial correlation as expressed by the sum of all SCCs. Applying the formula for the geometric series, \({\sum }_{n=1}^{\infty } e^{(n-1)\text {A}^{\text {T}}}=\left (\mathbb {I}_{d}-e^{\text {A}^{\text {T}}}\right )^{-1}\), we find
This sum also determines the long-term variability of the spike train. Indeed, the fundamental relation (Cox and Lewis 1966a)
relates the Fano factor F(t) of the spike count or the power spectrum \(S_{\text {out}}(f)\) of the output spike train to the second-order ISI statistics as expressed by the CV and the sum of SCCs. The Fano factor is defined as the variance-to-mean ratio of the spike count N(t) in the time window (0,t), i.e. \(F(t)=\langle {\Delta } N(t)^{2}\rangle /\langle N(t)\rangle \) (note that, in this definition, the origin of the time window is arbitrary; it does not have to coincide with a spike time). The power spectrum of the output spike train \(x_{\text {out}}(t)={\sum }_{i}\delta (t-t_{i})\) is defined as the Fourier transform of the spike-train auto-correlation function, i.e. \(S_{\text {out}}(f)=\int \mathrm {d}\tau \,e^{2\pi if\tau }\langle x_{\text {out}}(t)x_{\text {out}}(t+\tau )\rangle \). Using the expression for the squared CV, Eq. (122), and the summed SCC, Eq. (125), the long-time limit of the Fano factor is
In order to obtain an approximation for the Fano factor for an arbitrary length of the time window t, we follow Middleton et al. (2003): If t is large and 𝜖≪1, the spike count N(t) can be approximated by the free membrane potential V(t) with V(0)∈[0,1). Indeed, for weak noise we can assume \(\dot V(t)>0\), so that the n-th spike time \(t_{n}\) is equivalent to the crossing of the n-fold threshold, i.e. \(V(t_{n})=n\). This implies that the deviation \(\delta V(t)=V(t)-N(t)\) is bounded (in fact, \(0\leq \delta V(t)<1\)) and hence \(N(t)=V(t)+\mathcal {O}(1)\). This relation still holds if we assume the specific initial condition V(0)=0. The trajectory of V(t) can then be obtained by integrating Eq. (37a):
Thus, the mean and variance are
where h(t) has been defined in Eq. (66). For large times, the Fano factor is hence given by
In the exact limit \(t\rightarrow \infty \) we re-obtain the leading-order term of Eq. (127), which shows the consistency of the two approaches.
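For a renewal process all SCCs vanish, so the long-time Fano factor should reduce to \(C_{\mathrm {V}}^{2}\). The following Monte-Carlo sketch checks this special case with gamma-distributed ISIs of shape 4 (hence \(C_{\mathrm {V}}^{2}=0.25\)); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Renewal spike train with gamma ISIs: shape k = 4 gives CV^2 = 1/k = 0.25.
# Since all serial correlations vanish for a renewal process, the long-time
# Fano factor should approach CV^2.
k, n_isi = 4.0, 200_000
isis = rng.gamma(k, 1.0 / k, size=n_isi)      # mean ISI = 1
spikes = np.cumsum(isis)

T = 100.0                                     # counting window >> mean ISI
n_win = int(spikes[-1] // T)
counts, _ = np.histogram(spikes, bins=n_win, range=(0.0, n_win * T))
fano = counts[1:].var() / counts[1:].mean()   # drop first window (transient)
cv2 = isis.var() / isis.mean() ** 2
```

With these numbers, both `cv2` and `fano` come out close to 0.25, consistent with the long-time limit above.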
For small time windows, the spike train can be regarded as periodic, given the weakness of the noise. This argument holds irrespective of the specific correlation function of the noise, which allows us to apply the same approximation as in Middleton et al. (2003):
where {t}=t−⌊t⌋ is the difference between t and the largest integer ⌊t⌋ not exceeding t (sawtooth function). Following Middleton et al. (2003), a uniform approximation on the whole range of t can be obtained by adding the small-time and large-time approximations of the Fano factor:
A.3 Model of the synaptic input current
To test our theory (Figs. 3–9), we assume that the pool of presynaptic neurons can be split up into \(N_{\mathcal {E}}\) excitatory and \(N_{\mathcal {I}}=N-N_{\mathcal {E}}\) inhibitory neurons. Furthermore, the postsynaptic currents induced by each spike are modeled by the exponential impulse response functions \(j_{\mathcal {E}}(t)=J_{\mathcal {E}} e^{-t/\tau _{\mathcal {E}}}\theta (t)\) and \(j_{\mathcal {I}}(t)=J_{\mathcal {I}} e^{-t/\tau _{\mathcal {I}}}\theta (t)\) with synaptic time constants \(\tau _{\mathcal {E}} \) and \(\tau _{\mathcal {I}}\) (synaptic filtering). This corresponds to the simple first-order kinetics
where \(x_{\mathcal {E}/\mathcal {I},k}(t)={\sum }_{j}\delta \left (t-t_{j}^{(\mathcal {E}/\mathcal {I},k)}\right )\) is the spike train arriving at the k-th excitatory/inhibitory synapse. \(J_{\mathcal {E}}\) and \(J_{\mathcal {I}}\) denote the synaptic efficacies. The mean synaptic current is thus \( \langle I_{\text {syn}}(t)\rangle =N_{\mathcal {E}}J_{\mathcal {E}}\tau _{\mathcal {E}}\nu _{\mathcal {E}}+N_{\mathcal {I}}J_{\mathcal {I}}\tau _{\mathcal {I}}\nu _{\mathcal {I}}\), where we assumed that excitatory and inhibitory neurons fire with rate \(\nu _{\mathcal {E}}\) and \(\nu _{\mathcal {I}}\), respectively.
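As a sanity check of the mean-current formula, one can integrate the first-order kinetics for a single excitatory pool driven by Poisson spike trains and compare the time-averaged current with \(N_{\mathcal {E}}J_{\mathcal {E}}\tau _{\mathcal {E}}\nu _{\mathcal {E}}\). All parameter values below are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)

# Illustrative parameters: one excitatory pool of independent Poisson inputs.
N_E, J_E, tau_E, nu_E = 100, 0.1, 0.005, 10.0     # tau in s, rate in Hz
dt, t_end = 1e-4, 200.0
n_steps = int(t_end / dt)

# pooled presynaptic spike counts per time step
spk = rng.poisson(N_E * nu_E * dt, size=n_steps)

# Euler discretization of dI/dt = -I/tau_E + J_E * sum_k x_k(t), i.e.
# I[n] = (1 - dt/tau_E) * I[n-1] + J_E * spk[n], run as a linear filter:
I = lfilter([J_E], [1.0, -(1.0 - dt / tau_E)], spk)

mean_I = I.mean()
theory = N_E * J_E * tau_E * nu_E      # mean current <I> = N J tau nu
```

The simulated `mean_I` agrees with `theory` (here 0.5) up to Monte-Carlo error.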
In the frequency domain, the synaptic filter becomes
for \(i=\mathcal {E},\mathcal {I}\). In the special case where the individual presynaptic spike trains are independent and have power spectra \(S_{\mathcal {E}}(f)\) and \(S_{\mathcal {I}}(f)\), respectively, we find the power spectrum of the synaptic current from Eq. (27):
A.4 Signal encoded in a common rate modulation of Poissonian inputs
To model the encoding of a continuous signal s(t), we assume that s(t) represents the rate modulation of an inhomogeneous Poisson process. Specifically, we assume that the k-th presynaptic spike train is a Poisson process with rate
Here, s(t) is common to all inputs and is modeled as a zero-mean Gaussian process with power spectrum \(S_{s}(f)\). For the correlation matrix, Eq. (25), we find
The power spectrum \(S_{I}(f)\) of the total input current is then given by Eq. (27), and the normalized spectrum is \(S_{\text {in}}(f)=S_{I}(f)/\int \mathrm {d}f\,S_{I}(f)\). Specifically, for homogeneous excitatory and inhibitory populations in which only the excitatory population carries a signal (\(\varepsilon _{\mathcal {E}}=\varepsilon >0\), \(\varepsilon _{\mathcal {I}}=0\)), we find:
A signal spectrum with a power-law decay can be modeled by Eq. (15) (cf. Sobie et al. 2011), where \(\mathcal {N}=\frac {1}{2}(1-\gamma )/\left (f_{2}^{1-\gamma }-\gamma f_{1}^{1-\gamma }\right )\) is a normalization constant ensuring unit variance, i.e. \(\int \mathrm {d}f\,S_{s}(f)=1\). The steepness of the decay is quantified by the exponent γ. For γ=0, we recover a flat spectrum, i.e. white noise with a cut-off frequency \(f_{2}\). For γ=1, we obtain 1/f noise (flicker noise), which possesses strongly increased power at low frequencies. For the sake of the following calculations, we restrict γ to the range [0,1). Furthermore, we assume the lower cut-off to be small, \(f_{1}\ll 1\). Note that in the limit \(f_{1}\rightarrow 0\), we have to require γ<1 in order to obtain a signal with finite variance. Taking this limit allows us to derive analytically the statistical properties of the spike train at long times t in the range \(1\ll t\ll f_{1}^{-1}\). Nevertheless, in simulations we keep a small lower cut-off \(f_{1}>0\) in order to resolve the low-frequency components.
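Such a band-limited power-law signal can be synthesized in Fourier space. The sketch below generates a Gaussian signal with \(S_{s}(f)\propto f^{-\gamma }\) between the cut-offs and verifies the unit-variance normalization; the discretization parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

gamma_, f1, f2 = 0.5, 1e-4, 10.0       # exponent and cut-off frequencies
dt, n = 1e-2, 2**20                    # sampling step and number of samples

freqs = np.fft.rfftfreq(n, dt)
df = freqs[1] - freqs[0]

# one-sided target spectrum ~ f^{-gamma} inside the band, zero outside
S = np.zeros_like(freqs)
band = (freqs >= f1) & (freqs <= f2)
S[band] = freqs[band] ** (-gamma_)
S /= 2.0 * S.sum() * df                # two-sided variance int S df = 1

# random Fourier amplitudes with <|X_k|^2> = n S_k / dt (periodogram convention)
X = np.sqrt(S * n / (2.0 * dt)) * (
    rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size))
s = np.fft.irfft(X, n=n)               # real-valued Gaussian signal s(t)
```

A signal s(t) generated this way can serve as the common rate modulation in \(\nu _{k}(t)\) of the inhomogeneous Poisson inputs above.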
We are interested in the asymptotic behaviors
for \(1\ll t\ll f_{1}^{-1}\) and \(1\ll n\ll f_{1}^{-1}\), respectively. We will exploit the property that \(\text {sinc}^{2}(x)\) is negligibly small for x>1. Thus, in Eq. (136) only frequencies f<1 contribute to the integral. Similarly, we can exploit the fact that for large n, the term \(e^{-2\pi ifn}\) oscillates rapidly for f>1. Thus, only small frequencies f≪1 contribute to the integral in Eq. (137). This allows us to make the following approximations: First, if \(f_{2}>1\), we can replace the upper cut-off frequency by infinity, i.e. \(f_{2}\rightarrow \infty \), without changing the integrals in Eqs. (136) and (137). Second, if we assume a time-scale separation between spiking and fast synaptic filtering dynamics, \(\tau _{\mathcal {E}},\tau _{\mathcal {I}}\ll 1\), we can replace the synaptic filter by
because only frequencies f<1 matter in the above integrals. It then follows from Eq. (135) that the normalized spectrum has the form
with \(\sigma _{s}=\varepsilon _{\mathcal {E}}J_{\mathcal {E}}\tau _{\mathcal {E}}N_{\mathcal {E}}\nu _{\mathcal {E}}\) and \(S_{0}={\sum }_{k=\mathcal {E},\mathcal {I}}{J_{k}^{2}}{\tau _{k}^{2}}N_{k}\nu _{k}\). Using Eqs. (136) and (138), we find for the Fano factor
If the lower cut-off frequency \(f_{1}\) is small enough that \(f_{1}t\ll 1\), we can set the lower integration boundary to zero. This yields
where \(\mathcal {C}_{1}=2(2\pi )^{\gamma -1}{\Gamma }(-1-\gamma )\sin \left (\frac {\pi \gamma }{2}\right )\).
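The prefactor \(\mathcal {C}_{1}\) is readily evaluated numerically; note that \({\Gamma }(-1-\gamma )\) is taken at a negative non-integer argument, which is finite and positive for \(\gamma \in (0,1)\). A small helper (the evaluation at \(\gamma =1/2\) is merely illustrative):

```python
import numpy as np
from scipy.special import gamma as Gamma

def C1(g):
    """Prefactor C_1 = 2 (2 pi)^(g-1) Gamma(-1-g) sin(pi g / 2), for 0 < g < 1."""
    return 2.0 * (2.0 * np.pi) ** (g - 1.0) * Gamma(-1.0 - g) * np.sin(np.pi * g / 2.0)
```

For \(\gamma =1/2\) this gives \(\mathcal {C}_{1}\approx 1.33\); since Γ is positive on (−2,−1), \(\mathcal {C}_{1}>0\) throughout the admissible range.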
To get the behavior of the SCC for large lags n, we can again set \(f_{1}\) to zero. Ignoring prefactors, we find
For large n, we can approximate \((n\pm 1)^{1+\gamma }\approx n^{1+\gamma }\pm (1+\gamma )n^{\gamma }+\frac {\gamma (1+\gamma )}{2}n^{\gamma -1}\), hence
A.5 Inverse Gaussian input
To model non-Poissonian input spike trains with a CV smaller or larger than unity, we assume presynaptic spike trains to be renewal processes with an inverse Gaussian ISI density:
This distribution is parametrized by the rate ν and the coefficient of variation \(C_{\mathrm {V}}\). The spike train power spectrum is given by (Stratonovich 1967)
where \(\tilde P(f;\nu ,C_{\mathrm {V}})\) is the Fourier transform of the inverse Gaussian, Eq. (140), given by
(Holden 1976; Bulsara et al. 1994; Chacron et al. 2005). The power spectrum of the synaptic current is then, according to Eq. (27):
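As a quick numerical cross-check of this parametrization: NumPy's Wald sampler draws inverse Gaussian variates with given mean and scale, and since IG(μ,λ) has mean μ and variance \(\mu ^{3}/\lambda \), matching rate and CV requires μ=1/ν and \(\lambda =1/(\nu C_{\mathrm {V}}^{2})\). The values of ν and \(C_{\mathrm {V}}\) below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse Gaussian IG(mu, lam): mean mu, variance mu^3/lam, hence CV^2 = mu/lam.
# With rate nu: mu = 1/nu and lam = 1/(nu * CV^2).
nu, CV = 5.0, 0.7
mu = 1.0 / nu
lam = 1.0 / (nu * CV**2)
isis = rng.wald(mu, lam, size=200_000)   # renewal ISIs with rate nu and CV
```

The sample mean and coefficient of variation of `isis` reproduce 1/ν and \(C_{\mathrm {V}}\) up to Monte-Carlo error.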
A.6 Synapses with short-term plasticity
Following Dittman et al. (2000) and Merkel and Lindner (2010), we use a deterministic model for the synaptic dynamics. A presynaptic spike at the ℓ-th synapse causes a postsynaptic response with the history-dependent weight \(A_{\ell }(t)=F_{\ell }(t)D_{\ell }(t)\), where facilitation is governed by
and where depression dynamics obey
Here, \(x_{\ell }(t)\) is the presynaptic spike train, \(F_{0}\) is the intrinsic release probability, Δ quantifies the strength of facilitation, and \(\tau _{F}\) (\(\tau _{D}\)) is the timescale of facilitation (depression). Above, we have compared the limiting cases of pure facilitation and pure depression, which correspond to setting \(D_{\ell }(t)\equiv 1\) (pure facilitation) or \(F_{\ell }(t)\equiv F_{0}\) (pure depression).
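A minimal event-driven sketch of such facilitation–depression dynamics is given below. Between spikes, F relaxes to \(F_{0}\) and D recovers to 1 exponentially; the jump rules applied at spike times are common illustrative choices and not necessarily the exact update rules of the model above, and all parameter values are placeholders.

```python
import numpy as np

# Event-driven sketch of a deterministic facilitation-depression synapse.
# Between spikes, F relaxes to F0 (timescale tau_F) and D recovers to 1
# (timescale tau_D). The jump rules at each spike are illustrative
# assumptions, not necessarily the exact updates of the model in the text.
F0, Delta, tau_F, tau_D = 0.2, 0.3, 0.1, 0.5      # illustrative values

def synaptic_weights(spike_times):
    """Weight A = F * D experienced by each presynaptic spike."""
    F, D = F0, 1.0
    t_last = None
    weights = []
    for t in spike_times:
        if t_last is not None:
            gap = t - t_last
            F = F0 + (F - F0) * np.exp(-gap / tau_F)    # relax toward F0
            D = 1.0 + (D - 1.0) * np.exp(-gap / tau_D)  # recover toward 1
        weights.append(F * D)
        F += Delta * (1.0 - F)    # facilitation jump (saturating form, assumed)
        D *= 1.0 - F              # resources depleted with release probability F
        t_last = t
    return np.array(weights)
```

For a dense burst the weights first rise (facilitation) and then fall as D is depleted, whereas for widely separated spikes every weight returns to \(F_{0}\).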
A.7 Nonlinear integrate-and-fire models
Using the PRC, Eq. (18), the deviation of the i-th ISI from its mean is given by
This allows us to compute the serial correlations:
If \(C_{\text {in}}(t)\) does not change rapidly, we can approximate \(t_{k}\approx k\langle T\rangle \) in the last equation. Hence
where \(\tilde Z(f)=\frac {1}{\langle T\rangle }{\int }_{0}^{\langle T\rangle }\mathrm {d}t\,Z(t)e^{2\pi i ft}\). From this follow Eqs. (19) and (20).
The sum of all SCCs evaluates to
where \(\tilde Z(0)\) is equal to the average PRC obtained by averaging over the interval [0,〈T〉]. Furthermore, using Eq. (126) we obtain the long-time limit of the Fano factor or, equivalently, the spike train power spectrum in the zero-frequency limit as follows
This equation implies that for any neural oscillator driven by weak colored noise, the spike-count diffusion coefficient \(D_{\text {eff}}=S_{\text {out}}(0)/2\) is proportional to the intensity \(D_{\eta }=S(0)/2\) of the input noise, irrespective of the nonlinearity.
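The Fourier coefficient \(\tilde Z(f)\) entering these expressions is easy to evaluate numerically for a concrete PRC. The sketch below does so for an illustrative phase-response curve (the PRC shape is a placeholder, not taken from the paper, and \(\langle T\rangle \) is normalized to 1):

```python
import numpy as np

# Numerical evaluation of Z_tilde(f) = (1/<T>) int_0^<T> Z(t) e^{2 pi i f t} dt
# for an illustrative PRC; <T> is normalized to 1.
T_mean = 1.0
t = np.linspace(0.0, T_mean, 10_001)
dt = t[1] - t[0]

def Z(t):
    # hypothetical smooth PRC vanishing at both ends of the interval
    return np.sin(np.pi * t / T_mean) ** 2

def Z_tilde(f):
    integrand = Z(t) * np.exp(2j * np.pi * f * t)
    return integrand.sum() * dt / T_mean   # Riemann sum (endpoint values vanish)
```

Consistent with the statement above, \(\tilde Z(0)\) returns the average PRC, here 1/2 for the chosen sine-squared shape.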
Schwalger, T., Droste, F. & Lindner, B. Statistical structure of neural spiking under non-Poissonian or other non-white stimulation. J Comput Neurosci 39, 29–51 (2015). https://doi.org/10.1007/s10827-015-0560-x