
Statistical structure of neural spiking under non-Poissonian or other non-white stimulation

Published in: Journal of Computational Neuroscience

Abstract

Nerve cells in the brain generate sequences of action potentials with complex statistics. Theoretical attempts to understand these statistics were largely limited to the case of a temporally uncorrelated input (Poissonian shot noise) from the neurons in the surrounding network. However, the stimulation from thousands of other neurons has various sorts of temporal structure. Firstly, input spike trains are temporally correlated because their firing rates can carry complex signals and because of cell-intrinsic properties like neural refractoriness, bursting, or adaptation. Secondly, at the connections between neurons, the synapses, usage-dependent changes in the synaptic weight (short-term plasticity) further shape the correlation structure of the effective input to the cell. From the theoretical side, it is poorly understood how these correlated stimuli, so-called colored noise, affect the spike train statistics. In particular, no standard method exists to solve the associated first-passage-time problem for the interspike-interval statistics with an arbitrarily colored noise. Assuming that input fluctuations are weaker than the mean neuronal drive, we derive simple formulas for the essential interspike-interval statistics for a canonical model of a tonically firing neuron subjected to arbitrarily correlated input from the network. We verify our theory by numerical simulations for three paradigmatic situations that lead to input correlations: (i) rate-coded naturalistic stimuli in presynaptic spike trains; (ii) presynaptic refractoriness or bursting; (iii) synaptic short-term plasticity. In all cases, we find severe effects on interval statistics. Our results provide a framework for the interpretation of firing statistics measured in vivo in the brain.


References

  • Alijani, A., & Richardson, M. (2011). Rate response of neurons subject to fast or frozen noise: from stochastic and homogeneous to deterministic and heterogeneous populations. Physical Review E, 84, 011919.

  • Baddeley, R., Abbott, L.F., Booth, M.C.A., Sengpiel, F., Freeman, T., Wakeman, E.A., & Rolls, E.T. (1997). Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proceedings of the Royal Society of London B, 264, 1775.

  • Badel, L., Lefort, S., Brette, R., Petersen, C.C., Gerstner, W., & Richardson, M.J. (2008). Dynamic I-V curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. Journal of Neurophysiology, 99(2), 656.

  • Bair, W., Koch, C., Newsome, W., & Britten, K. (1994). Power spectrum analysis of bursting cells in area MT in the behaving monkey. Journal of Neuroscience, 14, 2870.

  • Bauermeister, C., Schwalger, T., Russell, D., Neiman, A., & Lindner, B. (2013). Characteristic effects of stochastic oscillatory forcing on neural firing statistics: theory and application to paddlefish electroreceptor afferents. PLoS Computational Biology, 9(8), e1003170.

  • Brenner, N., Agam, O., Bialek, W., & de Ruyter van Steveninck, R. (2002). Statistical properties of spike trains: universal and stimulus-dependent aspects. Physical Review E, 66, 031907.

  • Brunel, N. (2000). Sparsely connected networks of spiking neurons. Journal of Computational Neuroscience, 8, 183.

  • Brunel, N., & Sergi, S. (1998). Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. Journal of Theoretical Biology, 195, 87.

  • Brunel, N., Chance, F.S., Fourcaud, N., & Abbott, L.F. (2001). Effects of synaptic noise and filtering on the frequency response of spiking neurons. Physical Review Letters, 86, 2186.

  • Bulsara, A., Lowen, S.B., & Rees, C.D. (1994). Cooperative behavior in the periodically modulated Wiener process: noise-induced complexity in a model neuron. Physical Review E, 49, 4989.

  • Burkitt, A.N. (2006). A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics, 95, 1–19.

  • Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304, 1926.

  • Câteau, H., & Reyes, A.D. (2006). Relation between single neuron and population spiking statistics and effects on network activity. Physical Review Letters, 96, 058101.

  • Chacron, M.J., Longtin, A., St-Hilaire, M., & Maler, L. (2000). Suprathreshold stochastic firing dynamics with memory in P-type electroreceptors. Physical Review Letters, 85, 1576.

  • Chacron, M.J., Longtin, A., & Maler, L. (2005). Delayed excitatory and inhibitory feedback shape neural information transmission. Physical Review E, 72, 051917.

  • Compte, A., Constantinidis, C., Tegner, J., Raghavachari, S., Chafee, M.V., Goldman-Rakic, P.S., & Wang, X.J. (2003). Temporally irregular mnemonic persistent activity in prefrontal neurons of monkeys during a delayed response task. Journal of Neurophysiology, 90(5), 3441.

  • Cox, D.R., & Lewis, P.A.W. (1966a). The statistical analysis of series of events. London: Chapman and Hall.

  • Cox, D.R., & Lewis, P.A.W. (1966b). The statistical analysis of series of events. London: Chapman and Hall, chap. 4.6.

  • Cox, D.R., & Miller, H.D. (1965). The theory of stochastic processes. London: Chapman and Hall.

  • Destexhe, A., Rudolph, M., & Paré, D. (2003). The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience, 4(9), 739–751.

  • Dittman, J.S., Kreitzer, A.C., & Regehr, W.G. (2000). Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. Journal of Neuroscience, 20, 1374.

  • Droste, F., & Lindner, B. (2014). Integrate-and-fire neurons driven by asymmetric dichotomous noise. Biological Cybernetics, 108(6), 825–843.

  • Droste, F., Schwalger, T., & Lindner, B. (2013). Interplay of two signals in a neuron with heterogeneous synaptic short-term plasticity. Frontiers in Computational Neuroscience, 7, 86.

  • Dummer, B., Wieland, S., & Lindner, B. (2014). Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity. Frontiers in Computational Neuroscience, 8, 104.

  • Ermentrout, G.B., & Terman, D.H. (2010). Mathematical foundations of neuroscience. Springer.

  • Fisch, K., Schwalger, T., Lindner, B., Herz, A., & Benda, J. (2012). Channel noise from both slow adaptation currents and fast currents is required to explain spike-response variability in a sensory neuron. Journal of Neuroscience, 32(48), 17332–17344.

  • Fortune, E.S., & Rose, G.J. (2001). Short-term synaptic plasticity as a temporal filter. Trends in Neurosciences, 24, 381.

  • Fourcaud, N., & Brunel, N. (2002). Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Computation, 14, 2057.

  • Fourcaud-Trocmé, N., Hansel, D., van Vreeswijk, C., & Brunel, N. (2003). How spike generation mechanisms determine the neuronal response to fluctuating inputs. Journal of Neuroscience, 23(37), 11628–11640.

  • Franklin, J., & Bair, W. (1995). The effect of a refractory period on the power spectrum of neuronal discharge. SIAM Journal on Applied Mathematics, 55, 1074.

  • Gerstein, G.L., & Mandelbrot, B. (1964). Random walk models for the spike activity of a single neuron. Biophysical Journal, 4, 41.

  • Gerstner, W., & Kistler, W.M. (2002). Spiking neuron models: single neurons, populations, plasticity. Cambridge: Cambridge University Press.

  • Hänggi, P., & Jung, P. (1995). Colored noise in dynamical systems. Advances in Chemical Physics, 89, 239.

  • Holden, A.V. (1976). Models of the stochastic activity of neurones. Berlin: Springer.

  • Kamke, E. (1965). Differentialgleichungen, Lösungsmethoden und Lösungen II: partielle Differentialgleichungen erster Ordnung für eine gesuchte Funktion. Leipzig: Geest & Portig.

  • van Kampen, N.G. (1992). Stochastic processes in physics and chemistry. Amsterdam: North-Holland.

  • Koch, C. (1999). Biophysics of computation: information processing in single neurons. Oxford University Press.

  • Lerchner, A., Ursta, C., Hertz, J., Ahmadi, M., Ruffiot, P., & Enemark, S. (2006). Response variability in balanced cortical networks. Neural Computation, 18(3), 634.

  • Lindner, B. (2004). Interspike interval statistics of neurons driven by colored noise. Physical Review E, 69, 022901.

  • Lindner, B. (2006). Superposition of many independent spike trains is generally not a Poisson process. Physical Review E, 73, 022901.

  • Lindner, B., Gangloff, D., Longtin, A., & Lewis, J.E. (2009). Broadband coding with dynamic synapses. Journal of Neuroscience, 29(7), 2076–2088.

  • Liu, Y.H., & Wang, X.J. (2001). Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. Journal of Computational Neuroscience, 10, 25.

  • London, M., Roth, A., Beeren, L., Häusser, M., & Latham, P.E. (2010). Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature, 466(7302), 123.

  • Lowen, S.B., & Teich, M.C. (1992). Auditory-nerve action potentials form a nonrenewal point process over short as well as long time scales. Journal of the Acoustical Society of America, 92, 803.

  • Merkel, M., & Lindner, B. (2010). Synaptic filtering of rate-coded information. Physical Review E, 81(4), 041921.

  • Middleton, J.W., Chacron, M.J., Lindner, B., & Longtin, A. (2003). Firing statistics of a neuron model driven by long-range correlated noise. Physical Review E, 68, 021920.

  • Moreno-Bote, R., & Parga, N. (2004). Role of synaptic filtering on the firing response of simple model neurons. Physical Review Letters, 92(2), 028102.

  • Moreno-Bote, R., & Parga, N. (2006). Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Physical Review Letters, 96(2), 028101. doi:10.1103/PhysRevLett.96.028101.

  • Moreno-Bote, R., Beck, J., Kanitscheider, I., Pitkow, X., Latham, P., & Pouget, A. (2014). Information-limiting correlations. Nature Neuroscience, 17(10), 1410.

  • Nawrot, M.P., Boucsein, C., Rodriguez-Molina, V., Aertsen, A., Grun, S., & Rotter, S. (2007). Serial interval statistics of spontaneous activity in cortical neurons in vivo and in vitro. Neurocomputing, 70, 1717.

  • Peterson, A.J., Irvine, D.R.F., & Heil, P. (2014). A model of synaptic vesicle-pool depletion and replenishment can account for the interspike interval distributions and nonrenewal properties of spontaneous spike trains of auditory-nerve fibers. Journal of Neuroscience, 34(45), 15097.

  • Pozzorini, C., Naud, R., Mensi, S., & Gerstner, W. (2013). Temporal whitening by power-law adaptation in neocortical neurons. Nature Neuroscience, 16(7), 942–948.

  • Richardson, M.J.E. (2008). Spike-train spectra and network response functions for non-linear integrate-and-fire neurons. Biological Cybernetics, 99(4–5), 381.

  • Risken, H. (1984). The Fokker-Planck equation. Berlin: Springer.

  • Rosenbaum, R., Rubin, J., & Doiron, B. (2012). Short term synaptic depression imposes a frequency dependent filter on synaptic information transfer. PLoS Computational Biology, 8(6).

  • Salinas, E., & Sejnowski, T.J. (2002). Integrate-and-fire neurons driven by correlated stochastic input. Neural Computation, 14, 2111.

  • Schwalger, T., & Lindner, B. (2013). Patterns of interval correlations in neural oscillators with adaptation. Frontiers in Computational Neuroscience, 7, 164.

  • Schwalger, T., & Schimansky-Geier, L. (2008). Interspike interval statistics of a leaky integrate-and-fire neuron driven by Gaussian noise with large correlation times. Physical Review E, 77, 031914.

  • Schwalger, T., Fisch, K., Benda, J., & Lindner, B. (2010). How noisy adaptation of neurons shapes interspike interval histograms and correlations. PLoS Computational Biology, 6(12), e1001026. doi:10.1371/journal.pcbi.1001026.

  • Schwalger, T., Miklody, D., & Lindner, B. (2013). When the leak is weak – how the first-passage statistics of a biased random walk can approximate the ISI statistics of an adapting neuron. European Physical Journal Special Topics, 222(10), 2655.

  • Sobie, C., Babul, A., & de Sousa, R. (2011). Neuron dynamics in the presence of 1/f noise. Physical Review E, 83(5), 051912.

  • Stratonovich, R.L. (1967). Topics in the theory of random noise, vol 1. New York: Gordon and Breach.

  • Wang, X.J. (1998). Calcium coding and adaptive temporal computation in cortical pyramidal neurons. Journal of Neurophysiology, 79(3), 1549–1566.

  • Wang, X.J., Liu, Y., Sanchez-Vives, M.V., & McCormick, D.A. (2003). Adaptation and temporal decorrelation by single neurons in the primary visual cortex. Journal of Neurophysiology, 89(6), 3279–3293.


Acknowledgments

Research was supported by the European Research Council (Grant Agreement no. 268689, MultiRules), the BMBF (FKZ: 01GQ1001A), and the research training group GRK1589/1.

Conflict of interest

The authors declare that they have no conflict of interest.

Author information

Correspondence to Tilo Schwalger.

Additional information

Action Editor: Brent Doiron

Appendix A

A.1 Gaussian approximation of the synaptic input current

We would like to approximate the synaptic input current

$$ I_{\text{syn}}(t)=\sum\limits_{k=1}^{N}\sum\limits_{t_{i}^{(k)}<t}j_{k}(t-t_{i}^{(k)}) $$
(21)

by a Gaussian process with the same mean and correlation function as \(I_{\text{syn}}(t)\). To this end, let us rewrite Eq. (21) in terms of the delta-spike trains \(x_{k}(t)={\sum }_{t_{i}^{(k)}<t}\delta \left (t-t_{i}^{(k)}\right )\), k=1,…,N (where \(t_{i}^{(k)}\) denotes the i-th spike time at the k-th synapse):

$$ I_{\text{syn}}(t)=\sum\limits_{k=1}^{N}\int j_{k}(t-t^{\prime})x_{k}(t^{\prime})\,\mathrm{d}t^{\prime}. $$
(22)

In the case of stationary input, mean and correlation function are given by

$$ \langle I_{\text{syn}}(t)\rangle=\sum\limits_{k=1}^{N}\nu_{k}\int j_{k}(t)\,\mathrm{d}t. $$
(23)

and

$$ C_{I}(\tau)=\sum\limits_{k=1}^{N}\sum\limits_{l=1}^{N}\int\mathrm{d}t_{1}\,C_{kl}(t_{1})\int\mathrm{d}t_{2}\,j_{l}(t_{2} +\tau)j_{k}(t_{1}+t_{2}). $$
(24)

Here, \(\nu_{k}=\langle x_{k}(t)\rangle\) is the stationary firing rate of the k-th input spike train and \(C_{kl}(\tau)\) is the correlation function between spike trains defined by

$$ C_{kl}(\tau)=\langle x_{k}(t) x_{l}(t+\tau)\rangle-\nu_{k}\nu_{l}. $$
(25)
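As a concrete illustration of Eqs. (21)-(23), the shot-noise current can be generated on a time grid. The following Python sketch superposes N identical exponential kernels driven by independent Poisson spike trains and compares the empirical mean with Eq. (23); all parameter values (N, nu, J, tau_s) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch of Eq. (21): build I_syn(t) from independent Poisson presynaptic spike
# trains with identical exponential kernels j(t) = J*exp(-t/tau_s), t > 0.
# Parameters are hypothetical, chosen only for illustration.
rng = np.random.default_rng(1)
N, nu, J, tau_s = 200, 5.0, 0.1, 0.01   # synapses, rate [1/s], amplitude, time const [s]
dt, T = 1e-4, 20.0
n_steps = int(T / dt)

# total spike count per time bin, summed over all N Poisson inputs
counts = rng.poisson(N * nu * dt, size=n_steps)
kernel = J * np.exp(-np.arange(0.0, 5 * tau_s, dt) / tau_s)
I_syn = np.convolve(counts, kernel)[:n_steps]   # discretized Eq. (22)

# compare the empirical mean (after the initial transient) with Eq. (23):
# <I_syn> = N * nu * integral j(t) dt = N * nu * J * tau_s
print(I_syn[n_steps // 2:].mean(), N * nu * J * tau_s)
```

The small residual discrepancy comes from time discretization and finite simulation length.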

It is convenient to formulate Eq. (24) in the Fourier domain. Defining the power spectrum as the Fourier transform of the correlation function

$$ S_{I}(f)=\int\mathrm{d}\tau\,e^{2\pi if\tau}C_{I}(\tau) $$
(26)

we find

$$ S_{I}(f)=\sum\limits_{k=1}^{N}\sum\limits_{l=1}^{N}\tilde{j}_{k}^{\ast}(f)\tilde j_{l}(f)S_{kl}(f). $$
(27)
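For the special case of N identical synapses driven by independent Poisson spike trains, the cross-spectra reduce to \(S_{kl}(f)=\nu\,\delta_{kl}\), and with exponential kernels Eq. (27) becomes a Lorentzian. A small numerical sketch (hypothetical parameters) evaluates Eq. (27) in this case and checks the total power against Campbell's theorem for shot noise:

```python
import numpy as np

# Evaluate Eqs. (23) and (27) for N identical synapses with exponential kernels
# j(t) = J*exp(-t/tau_s), t > 0, and independent Poisson inputs, for which
# S_kl(f) = nu * delta_kl. Parameters are illustrative.
N, nu, J, tau_s = 1000, 5.0, 0.1, 0.005

def j_tilde(f):
    """Fourier transform of the kernel, e^{+2 pi i f t} convention of Eq. (26)."""
    return J * tau_s / (1.0 - 2j * np.pi * f * tau_s)

f = np.linspace(-20000.0, 20000.0, 2000001)
S_I = N * nu * np.abs(j_tilde(f)) ** 2       # Eq. (27) with S_kl = nu*delta_kl

mean_I = N * nu * j_tilde(0).real            # Eq. (23): N * nu * J * tau_s
var_I = np.sum(S_I) * (f[1] - f[0])          # total power, equals C_I(0)
var_campbell = N * nu * J**2 * tau_s / 2     # Campbell: N * nu * integral j(t)^2 dt
print(mean_I, var_I, var_campbell)
```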

Here and in the following, \(\tilde{\ }\) and \(\ast\) denote the Fourier transform and complex conjugation, respectively. Using these approximations, we can write the current balance Eq. (1) as

$$ \dot V=\mu+\zeta(t), $$
(28)

where

$$ \mu=\frac{1}{C_{\text{m}}}\left(I_{0}+\sum\limits_{k=1}^{N}\nu_{k}\tilde{j}_{k}(0)\right) $$
(29)

and \(\zeta(t)\) is a zero-mean Gaussian process with power spectrum

$$ S_{\zeta}(f)=\frac{1}{C_{\mathrm{m}}^{2}}S_{I}(f). $$
(30)

If \(S_{\zeta}(f)\) is not constant, \(\zeta(t)\) will be referred to as colored noise (in contrast to white noise, which has a flat power spectrum).

A.2 Derivation of the interspike interval statistics

Our approach assumes that the colored noise can be represented as \(\zeta(t)=\mathbf{b}\cdot\mathbf{Y}(t)\), where b is a constant d-dimensional vector and Y(t) is a d-dimensional Ornstein-Uhlenbeck process (OUP). Specifically, the neuron model can be written as

$$ \dot V=\mu+\mathbf{b}\cdot\mathbf{Y}\ $$
(31a)
$$ \dot{\mathbf{Y}}=\mathrm{A}\mathbf{Y}+\mathrm{B}\boldsymbol{\xi}(t) $$
(31b)

with the additional fire-and-reset rule that V is reset to V=0 whenever V reaches the threshold V T. A and B are drift and diffusion matrices, respectively. The eigenvalues of A must have negative real part and ξ(t) is a vector of Gaussian white noise processes that obey \(\langle \xi _{i}(t)\xi _{j}(t^{\prime })\rangle =\delta _{i,j}\delta (t-t^{\prime })\). Furthermore, we assume that the elements of b have the same physical dimensions as μ and that Y has non-dimensional units.
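For intuition, the model Eq. (31) can be integrated with a simple Euler-Maruyama scheme; this is not part of the derivation, just a numerical check. The sketch below assumes the minimal case d = 1 with A = -1/τ_c and B = √(2/τ_c), so that ⟨Y²⟩ = 1; for weak noise the mean ISI should be close to V_T/μ:

```python
import numpy as np

# Euler-Maruyama sketch of Eqs. (31a)-(31b) with a one-dimensional OUP
# (A = -1/tau_c, B = sqrt(2/tau_c), stationary variance <Y^2> = 1).
# Parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
mu, V_T, b, tau_c = 1.0, 1.0, 0.1, 0.5
dt, T = 1e-3, 1000.0
n = int(T / dt)
xi = rng.standard_normal(n) * np.sqrt(2 * dt / tau_c)  # white-noise increments

V, Y, t_last, isis = 0.0, 0.0, 0.0, []
for step in range(n):
    V += (mu + b * Y) * dt                 # Eq. (31a)
    Y += -Y / tau_c * dt + xi[step]        # Eq. (31b), scalar case
    if V >= V_T:                           # fire-and-reset rule
        isis.append((step + 1) * dt - t_last)
        t_last, V = (step + 1) * dt, 0.0   # reset (overshoot discarded)

mean_isi = np.mean(isis)
print(mean_isi)   # close to V_T/mu = 1 for weak noise (b << mu)
```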

The variance of \(\zeta(t)\) is given by \(\sigma^{2}=\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{b}\), where Σ is the covariance matrix of the OUP with elements

$$ \varSigma_{ij}=\langle Y_{i}Y_{j}\rangle. $$
(32)

The covariance matrix is related to the drift matrix A and the diffusion matrix

$$ \mathrm{D}=\frac{1}{2}{\mathrm{B}^{\mathrm{T}}}\mathrm{B} $$
(33)

by the Lyapunov equation (see e.g. Risken 1984; van Kampen 1992)

$$ \mathrm{A}\varSigma+\varSigma{\mathrm{A}^{\mathrm{T}}}=-2\mathrm{D}. $$
(34)

This system of linear equations can be used to solve for the elements of the covariance matrix \(\varSigma_{ij}\).
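A minimal sketch of that linear solve (the 2-d matrices below are illustrative, not from the paper): vectorizing Eq. (34) in column-major order turns it into the linear system (I ⊗ A + A ⊗ I) vec(Σ) = vec(-2D), where ⊗ is the Kronecker product. scipy.linalg.solve_continuous_lyapunov performs the same task.

```python
import numpy as np

# Solve the Lyapunov equation (34), A*Sigma + Sigma*A^T = -2*D, as a linear
# system via column-major vectorization. Matrices are illustrative.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])   # eigenvalues with negative real part
B = np.array([[1.0, 0.0],
              [0.3, 0.8]])
D = 0.5 * B.T @ B             # diffusion matrix, Eq. (33)

d = A.shape[0]
M = np.kron(np.eye(d), A) + np.kron(A, np.eye(d))
Sigma = np.linalg.solve(M, (-2.0 * D).ravel(order="F")).reshape(d, d, order="F")

residual = np.max(np.abs(A @ Sigma + Sigma @ A.T + 2.0 * D))
print(residual)   # residual of Eq. (34), at machine precision
```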

A.2.1 Non-dimensional model

It is useful to non-dimensionalize the model, Eq. (31), by measuring time in units of the mean ISI, \(V_{\mathrm{T}}/\mu\), and voltage in units of \(V_{\mathrm{T}}\). This corresponds to the introduction of dimensionless variables

$$ \hat V=\frac{V}{V_{\mathrm{T}}},\quad \hat t=\frac{\mu}{V_{\mathrm{T}}}t $$
(35)

and dimensionless parameters

$$ \hat{\mathbf{b}}=\frac{{\mathbf{b}}}{\sigma},\qquad\hat{\mathrm{A}}=\frac{V_{\mathrm{T}}}{\mu}{\mathrm{ A}},\qquad \hat{\mathrm{B}}=\sqrt{\frac{V_{\mathrm{T}}}{\mu}}{\mathrm{ B}}. $$
(36)

This yields the non-dimensionalized Langevin equation

$$ \dot{\hat{V}}=1+\epsilon\hat{\mathbf{b}}\cdot\mathbf{Y}, $$
(37a)
$$ \dot{\mathbf{Y}}=\hat{\mathrm{A}}\mathbf{Y}+\hat{\mathrm{B}}\boldsymbol{\xi}(t). $$
(37b)

The non-dimensional firing threshold is \(\hat {V}_{\mathrm {T}}=1\). To make the small parameter in our perturbation calculation explicit, we have introduced the dimensionless parameter \(\epsilon=\sigma/\mu\ll 1\). In the following, the non-dimensional model, Eq. (37), will be considered; for notational simplicity, we omit the hats on the non-dimensional quantities.

The stationary auto-correlation function of the noise \(\eta(t)=\mathbf{b}\cdot\mathbf{Y}(t)\) is, for τ≥0, given by (see e.g. Risken 1984)

$$ C_{\text{in}}(\tau)\equiv\langle\eta(t)\eta(t+\tau)\rangle=\mathbf{b}^{\mathrm{T}}e^{\tau \mathrm{A}}\varSigma\mathbf{b}, $$
(38)

where \(e^{\tau\mathrm{A}}\) is the matrix exponential defined by its series representation \(e^{{\tau }\mathrm {A}}={\sum }_{k=0}^{\infty }\frac {1}{k!}(\tau \mathrm {A})^{k}\). Note that by our rescaling of b, Eq. (36), the variance of \(\eta(t)\) is \(C_{\text{in}}(0)=\langle\eta^{2}\rangle=1\). Furthermore, taking the Fourier transform of the correlation function yields the input power spectrum

$$ S_{\text{in}}(f)=-2\mathbf{b}^{\mathrm{T}}\left[\mathrm{A}^{2}+(2\pi f)^{2}\mathbb{I}_{d}\right]^{-1}\mathrm{A}\varSigma\mathbf{b}, $$
(39)

where \(\mathbb {I}_{d}\) is the d×d identity matrix.
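A scalar sanity check of Eqs. (38)-(39): for d = 1 with A = -1/τ_c, Σ = 1 and b = 1 (an assumed minimal case, consistent with unit variance after the rescaling of Eq. (36)), Eq. (38) gives an exponential correlation function and Eq. (39) the Lorentzian spectrum S_in(f) = 2τ_c/(1+(2πfτ_c)²):

```python
import numpy as np

# Scalar check of Eqs. (38)-(39): d = 1, A = -1/tau_c, Sigma = 1, b = 1.
# For a 1x1 matrix the elementwise exp equals the matrix exponential.
tau_c = 0.5
A = np.array([[-1.0 / tau_c]])
Sigma = np.array([[1.0]])
b = np.array([1.0])

def C_in(tau):
    """Eq. (38): b^T exp(tau*A) Sigma b."""
    return float(b @ (np.exp(tau * A) * Sigma) @ b)

def S_in(f):
    """Eq. (39): -2 b^T [A^2 + (2 pi f)^2 I]^{-1} A Sigma b."""
    M = np.linalg.inv(A @ A + (2 * np.pi * f) ** 2 * np.eye(1))
    return float(-2 * b @ (M @ A @ Sigma) @ b)

print(C_in(0.0))    # variance, equals 1 by construction
print(S_in(0.2))    # should match the Lorentzian 2*tau_c/(1+(2*pi*f*tau_c)^2)
```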

A.2.2 Fokker-Planck equation

The stochastic dynamics 37a with fire-and-reset rule can be reformulated in terms of probability currents and densities. The probability current that describes the drift of V(t) is given by

$$ J_{v}(v,\mathbf{y},t)=(1+\epsilon \mathbf{b}\cdot\mathbf{y})p(v,\mathbf{y},t), $$
(40)

where \(p(v,\mathbf{y},t)=\langle\delta(v-V(t))\,\delta(\mathbf{y}-\mathbf{Y}(t))\rangle\) is the joint probability density of V(t) and Y(t) at time t. The d-dimensional OUP corresponds to d probability currents

$$ J_{i}(v,\mathbf{y},t)={\sum}_{j=1}^{d}\left(A_{ij}y_{j}-D_{ij}\frac{\partial}{\partial y_{j}}\right)p(v,\mathbf{y},t). $$
(41)

Here, \(i=1,\dotsc ,d\) and the non-dimensionalized diffusion matrix is defined by \(\mathrm {D}=\frac {1}{2}\hat {\mathrm {B}}^{\mathrm {T}}\hat {\mathrm {B}}\). Because trajectories cannot leave the system, the probability is conserved, which can be expressed by the continuity equation

$$ \frac{{\partial}p}{{\partial}t}+\frac{{\partial}J_{v}}{\partial v}+{\sum}_{i=1}^{d}\frac{{\partial}J_{i}}{\partial y_{i}}=J_{v}(1,\mathbf{y},t)[\delta(v)-\delta(v-1)]. $$
(42)

The right hand side represents the reinjection and outflux of probability at the reset v=0 (source) and the threshold v=1 (sink), respectively, corresponding to the reset of trajectories. With the definition of the currents, Eqs. (40) and (41), the last equation can be written as a partial differential equation for p(v,y,t). Because the source and sink terms are sharply localized, we can omit these terms in the regions v<0, 0<v<1 and v>1, and we arrive at the Fokker-Planck equation (FPE), Eq. (3), with non-dimensionalized parameters μ=1 and σ=𝜖. The source and sink terms can be realized by imposing appropriate boundary conditions at v=0 and v=1. The sink term at v=1 ensures that no probability can be found beyond the threshold and thus there is no backflux of probability from above the threshold, i.e. \(J_{v}(1,\mathbf{y},t)=0\) for \(1+\epsilon\mathbf{b}\cdot\mathbf{y}<0\). With Eq. (40) this implies the condition (cf. Brunel and Sergi 1998)

$$ p(1,\mathbf{y},t)=0\qquad\forall \mathbf{y}:1+\epsilon\mathbf{b}\cdot\mathbf{y}<0. $$
(43)

Furthermore, integration of Eq. (42) from v=−ε to v=ε (and letting \(\varepsilon \rightarrow 0\)) reveals that \(J_{v}(v,\mathbf{y},t)\) suffers a jump at v=0 by an amount \(J_{v}(1,\mathbf{y},t)\). With Eq. (40) this implies the condition

$$ p(0+,\mathbf{y},t)-p(0-,\mathbf{y},t)=p(1,\mathbf{y},t), $$
(44)

where v=0− and v=0+ denote the left- and right-sided limits. Finally, probability should not leak across infinite boundaries and the probability density must be normalized:

$$ \lim_{v\rightarrow-\infty}p(v,\mathbf{y},t)=0,\quad \lim_{y_{i}\rightarrow\pm\infty}p(v,\mathbf{y},t)=0, $$
(45)
$$ \int\mathrm{d}v\int\mathrm{d}^{d}y\,p(v,\mathbf{y},t)=1. $$
(46)

In addition to the boundary conditions, the FPE must be supplied with an initial condition. Because we aim at the statistics of the interspike intervals, it is advantageous to consider an ensemble of neurons that have just fired a spike at time t=0. This corresponds to preparing the system in a state in which the membrane potential has been reset to zero, i.e. V(0)=0, and the OUP is distributed according to the distribution of Y sampled at the moments of firing. If we denote this distribution by \(p_{\text{spike}}(\mathbf{y})\), the initial condition for the FPE must be of the form \(p(v,\mathbf{y},0)=\delta(v)\,p_{\text{spike}}(\mathbf{y})\).

A.2.3 Distribution upon firing

In order to determine \(p_{\text{spike}}(\mathbf{y})\), we first seek the stationary solution \(p_{s}(v,\mathbf{y})\) of the FPE. Setting \(\partial p/\partial t=0\), we obtain the stationary FPE

$$ 0=(1+\epsilon \mathbf{b}^{\mathrm{T}}\mathbf{y})\frac{\partial p_{s}}{\partial v}+{\sum}_{i,j=1}^{d}\left(A_{ij}\frac{\partial}{\partial y_{i}}(y_{j}p_{s})-D_{ij}\frac{\partial^{2}p_{s}}{\partial y_{i}\partial y_{j}}\right), $$
(47)

which is still difficult to solve analytically due to the discontinuous boundary condition (43) at v=1. However, for weak noise, \(\epsilon\ll 1\), the velocity \(1+\epsilon\mathbf{b}\cdot\mathbf{y}\) is practically always positive, so that the boundary condition (43) can be safely ignored. A positive velocity also precludes trajectories from entering the region v<0. As a consequence, the system is restricted to the region 0≤v≤1 and has the periodic boundary condition

$$ p_{s}(0,\mathbf{y})=p_{s}(1,\mathbf{y}),\qquad \mathbf{y}\in\mathbb{R}^{d} $$
(48a)
$$ \lim_{y_{i}\rightarrow\pm\infty}p_{s}(v,\mathbf{y})=0,\qquad v\in[0,1] $$
(48b)
$$ \int\mathrm{d}^{d}y\,{{\int}_{0}^{1}}\mathrm{d}v\,p_{s}(v,\mathbf{y})=1. $$
(48c)

A solution of Eq. (47) is given by the stationary distribution of the OUP, i.e.

$$ p_{s}(v,\mathbf{y})=p_{s}(\mathbf{y})=\frac{1}{\sqrt{(2\pi)^{d}\left|\varSigma\right|}}\exp(-\frac{1}{2}\mathbf{y}^{\mathrm{T}}\varSigma^{-1}\mathbf{y}), $$
(49)

which clearly fulfills the boundary conditions (48a)-(48c).

The distribution upon firing \(p_{\text{spike}}(\mathbf{y})\) is proportional to the stationary probability current \(J_{v}^{(s)}(1,\mathbf {y})=(1+\epsilon \mathbf {b}\cdot \mathbf {y})p_{s}(1,\mathbf {y})\) across the threshold (Lindner 2004). In fact, the probability per unit time that a trajectory crosses the threshold through the surface element \(\mathrm{d}S(\mathbf{y})\) at the point y on the hyperplane defined by v=1 is given by \(J_{v}^{(s)}(1,\mathbf {y})\,\mathrm {d} y_{1}\dotsb \mathrm {d} y_{d}\). Thus, the probability density upon firing must be proportional to \(J_{v}^{(s)}(1,\mathbf {y})\). The factor of proportionality follows from the normalization condition, which yields \(p_{\text{spike}}(\mathbf{y})=(1+\epsilon\mathbf{b}\cdot\mathbf{y})p_{s}(1,\mathbf{y})\) and hence the initial distribution

$$ p(v,\mathbf{y},0)=\delta(v)(1+\epsilon \mathbf{b}\cdot\mathbf{y})p_{s}(\mathbf{y}). $$
(50)

A.2.4 Interspike interval density

An interspike interval is equivalent to the first-passage time (FPT) with respect to the threshold \(V_{\mathrm{T}}=1\), i.e. the time that a system prepared according to a spike at t=0 needs to reach the threshold for the first time. A standard approach to obtain the probability density of the FPT (i.e. the ISI density) is to consider an ensemble in which trajectories are not reset upon threshold crossing but are taken out of the system. As a consequence, the probability to find a trajectory with v<1,

$$ G(t)=\int\mathrm{d}^{d}y\,{\int}_{-\infty}^{1}\mathrm{d}v\,p(v,\mathbf{y},t), $$
(51)

is the probability that a trajectory survives until time t. Here, p(v,y,t) is the time-dependent solution of the FPE, for which the reset condition (44) has now been dropped. The FPT density is given by P(t)=−dG/dt, or

$$ P(t)=-\int\mathrm{d}^{d}y\,{\int}_{-\infty}^{1}\mathrm{d}v\,\partial_{t}p(v,\mathbf{y},t). $$
(52)

Using the continuity Eq. (42) and the natural boundary conditions (45) yields

$$ P(t)=\int\mathrm{d}^{d}y\,J_{v}(1,\mathbf{y},t)=\mathcal{P}(t,1), $$
(53)

where

$$ \mathcal{P}(t,v)=\int\mathrm{d}^{d}y\,(1+\epsilon \mathbf{b}^{\mathrm{T}}\mathbf{y})p(v,\mathbf{y},t) $$
(54)

is the FPT density with respect to a general threshold \(V_{\mathrm{T}}=v\). Equation (53) simply expresses that the FPT density equals the total probability current through the threshold at time t. The ISI density can be rewritten as

$$ P(t)=\left.\frac{1}{2\pi}\int\mathrm{d}\ell\,e^{-i\ell v}\tilde P(t,\ell)\right|_{v=1}, $$
(55)

where \(\tilde P(t,\ell )=\int \mathrm {d}v\,e^{i\ell v}\mathcal {P}(t,v)\) is the Fourier transform with respect to v. Using Eq. (54), we thus have

$$ \tilde P(t,\ell)=\int\mathrm{d}v\,e^{i\ell v}\int\mathrm{d}^{d}y\,(1+\epsilon \mathbf{b}^{\mathrm{T}}\mathbf{y})p(v,\mathbf{y},t). $$
(56)

This function can be further related to the characteristic function

$$ q(\ell,\mathbf{k},t)=\int\mathrm{d}v\,e^{i\ell v}\int\mathrm{d}^{d}y\,e^{i\mathbf{k}\cdot\mathbf{y}}p(v,\mathbf{y},t), $$
(57)

where k is a d-dimensional vector with elements \(k_{i}\), \(i=1,\dotsc ,d\). Indeed, comparing (56) with (57), we find the relation

$$ \tilde P(t,\ell)=\left.q(\ell,\mathbf{k},t)-i\epsilon[{\nabla_{\mathbf{k}} q(\ell,\mathbf{k},t)}]^{\mathrm{T}}\mathbf{b}\right|_{\mathbf{k}=0}, $$
(58)

where \(\nabla_{\mathbf{k}}q\) denotes the gradient of q with respect to k, i.e. \(\nabla_{\mathbf{k}}q\) is a vector with elements \(\partial q/\partial k_{i}\). Thus, if the characteristic function q(ℓ,k,t) is known, one can derive the ISI density via (55) and (58). In order to compute q(ℓ,k,t), it is not necessary to calculate the probability density p(v,y,t) explicitly from the FPE, which does not permit a closed-form solution for the initial condition (50). However, applying the Fourier transform to the FPE directly yields a first-order differential equation for q(ℓ,k,t):

$$ \partial_{t}q-(\mathbf{k}^{\mathrm{T}}\mathrm{A}+\epsilon\ell\mathbf{b}^{\mathrm{T}})\nabla_{\mathbf{k}} q=(-\mathbf{k}^{\mathrm{T}}\mathrm{D}\mathbf{k}+i\ell)q $$
(59)

with initial condition

$$ q(\ell,\mathbf{k},0)=(1+i\epsilon\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{k})\exp\left(-\frac{1}{2}\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}\right). $$
(60)

This equation can be solved by the method of characteristics (see e.g. Kamke 1965). The characteristic equations are

$$ \dot{\mathbf{k}}=-\mathrm{A}^{\mathrm{T}}\mathbf{k}-\epsilon\ell\mathbf{b} $$
(61)
$$ \dot{q}=(-\mathbf{k}^{\mathrm{T}}\mathrm{D}\mathbf{k}+i\ell)q. $$
(62)

The solution of the first equation is

$$ \mathbf{k}(t,\mathbf{c})=e^{-t{\mathrm{A}^{\mathrm{T}}}}\mathbf{c}-{\epsilon\ell{\int}_{0}^{t}}\mathrm{d}t^{\prime}\, e^{-(t-t^{\prime})\mathrm{A}^{\mathrm{T}}}\mathbf{b}, $$
(63)

which can be solved for the initial vector c:

$$ \mathbf{c}(t,\mathbf{k})=e^{t\mathrm{A}^{\mathrm{T}}}\mathbf{k}+{\epsilon\ell{\int}_{0}^{t}}\mathrm{d}t^{\prime}\, e^{t^{\prime}\mathrm{A}^{\mathrm{T}}}\mathbf{b}. $$
(64)

The solution of Eq. (62) is given by

$$\begin{array}{@{}rcl@{}} q(t,\mathbf{c})&=&Q_{0}(\mathbf{c})\\ &&\times\exp\left(-\frac{1}{2}\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}+i\ell t\right.\\ &&\qquad\quad\,-\left.\epsilon\ell\mathbf{b}^{\mathrm{T}}{\varSigma{\int}_{0}^{t}}\mathrm{d}t^{\prime}\,\mathbf{k}(t^{\prime},\mathbf{c})\right), \end{array} $$
(65)

where the Lyapunov equation (34) has been used. The prefactor Q 0(c) can be determined from the initial condition (60) as \( Q_{0}=1+i\epsilon \mathbf {b}^{\mathrm {T}}\varSigma \mathbf {c}\). The integration in Eq. (65) has to be performed using the expression (63), followed by a re-substitution of the vector c=c(t,k) according to Eq. (64). As a result we obtain the characteristic function

$$\begin{array}{@{}rcl@{}} q(\ell,\mathbf{k},t)&=&(1+i\epsilon\mathbf{b}^{\mathrm{T}}\varSigma e^{t\mathrm{A}^{\mathrm{T}}}\mathbf{k}+i\epsilon^{2}\ell g(t))\\ &&\times\exp\left(-\frac{1}{2}\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}+i\ell t\right.\\ &&\left.\qquad\quad\,-\epsilon\ell\mathbf{b}^{\mathrm{T}}{\varSigma{\int}_{0}^{t}}\mathrm{d}t^{\prime}\, e^{t^{\prime}\mathrm{A}^{\mathrm{T}}}\mathbf{k}-\epsilon^{2}\ell^{2} h(t)\right), \end{array} $$

where

$$ g(t)={\int}_{0}^{t}\mathrm{d}t^{\prime}\,C_{\text{in}}(t^{\prime}),\qquad h(t)={\int}_{0}^{t}\mathrm{d}t^{\prime\prime}\,{\int}_{0}^{t^{\prime\prime}}\mathrm{d}t^{\prime}\,C_{\text{in}}(t^{\prime}). $$
(66)

Using Eq. (58) leads to

$$\tilde{P}(t,\ell)=[1+\epsilon^{2}C_{\text{in}}(t)+2i\epsilon^{2}\ell g(t)-\epsilon^{4}\ell^{2} g^{2}(t)]e^{i\ell t-\epsilon^{2}\ell^{2} h(t)}, $$

and

$$\begin{array}{@{}rcl@{}} P(t)&=&\frac{1}{2\sqrt{4\pi\epsilon^{2} h^{3}(t)}}\exp\left[-\frac{(t-1)^{2}}{4\epsilon^{2} h(t)}\right]\\ &&\times\left\{\frac{[(1-t) g(t)+2 h(t)]^{2}}{2 h(t)}\right.\\ &&\left.\quad\,\,\,-\epsilon^{2}\left[g^{2}(t)-2 h(t)C_{\text{in}}({t})\right]\vphantom{\frac{[(1-t) g(t)+2 h(t)]^{2}}{2 h(t)}}\right\}, \end{array} $$
(67)

which constitutes the final result for the ISI density. Let us provide two alternative expressions for h(t) and g(t). First, if A is invertible we can write these functions in terms of the drift matrix:

$$\begin{array}{@{}rcl@{}} h(t)&=&\mathbf{b}^{\mathrm{T}}\varSigma{(\text{A}^{\mathrm{T}})^{-2}\left[e^{t\mathrm{A}^{\mathrm{T}}}-t\text{A}^{\mathrm{T}} -\mathbb{I}_{d}\right]}\mathbf{b} \end{array} $$
(68)
$$\begin{array}{@{}rcl@{}} g(t)&=&\mathbf{b}^{\mathrm{T}}\varSigma{(\text{A}^{\mathrm{T}})^{-1}\left[e^{t\mathrm{A}^{\mathrm{T}}}-\mathbb{I}_{d}\right]}\mathbf{b}. \end{array} $$
(69)

Second, using the power spectrum of the noise

$$ S_{\text{in}}(f)={\int}_{-\infty}^{\infty}\mathrm{d}\tau\,C_{\text{in}}{(\tau)}e^{2\pi if\tau}, $$
(70)

we find

$$\begin{array}{@{}rcl@{}} h(t)&=&\frac{t^{2}}{2}{\int}_{-\infty}^{\infty}\mathrm{d}f\,S_{\text{in}}(f)\text{sinc}^{2}\,(ft),\\ g(t)&=&t{\int}_{-\infty}^{\infty}\mathrm{d}f\,S_{\text{in}}(f)\text{sinc}\,(2ft), \end{array} $$

where \(\text {sinc}(x)=\sin (\pi x)/(\pi x)\).
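As a cross-check (our own numerical sketch, not part of the original derivation), the three equivalent representations of g(t) and h(t), namely the time-domain integrals (66), the matrix expressions (68) and (69), and the spectral formulas above, can be compared for an illustrative one-dimensional OUP with A = −1, Σ = 1 and b = 1, for which C_in(τ) = e^{−|τ|} and S_in(f) = 2/(1+(2πf)²):

```python
import numpy as np

# Illustrative 1-d OUP: A = -1, Sigma = 1, b = 1, so that
# C_in(tau) = exp(-|tau|) and S_in(f) = 2 / (1 + (2*pi*f)^2).
t = 2.0

# Time-domain integrals, Eq. (66), done analytically for this C_in.
g_time = 1.0 - np.exp(-t)
h_time = t - 1.0 + np.exp(-t)

# Matrix expressions, Eqs. (68)-(69); all quantities are scalars here.
At = -1.0
h_mat = At**-2 * (np.exp(t * At) - t * At - 1.0)
g_mat = At**-1 * (np.exp(t * At) - 1.0)

# Spectral formulas; np.sinc(x) = sin(pi*x)/(pi*x), matching the text.
f = np.linspace(-100.0, 100.0, 2_000_001)
df = f[1] - f[0]
S_in = 2.0 / (1.0 + (2.0 * np.pi * f) ** 2)
h_spec = 0.5 * t**2 * np.sum(S_in * np.sinc(f * t) ** 2) * df
g_spec = t * np.sum(S_in * np.sinc(2.0 * f * t)) * df

print(h_time, h_mat, h_spec)  # three expressions for h(2) should agree
print(g_time, g_mat, g_spec)  # same for g(2)
```

For this correlation function, Eq. (66) gives the closed forms g(t) = 1 − e^{−t} and h(t) = t − 1 + e^{−t}, which all three routes reproduce.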

1.2.5 Cumulants of the n-th order intervals

Apart from the ISI density, the ISI statistics can be characterized by its cumulants (semi-invariants). We also calculate these cumulants for the nth-order intervals because they will be used to find an expression for the SCC and the calculation does not pose any extra difficulties. In fact, as argued in the main text, the time \(t_{n}\) of the n-th threshold crossing, i.e. the nth-order interval, is equal to the first-passage time of the free process with respect to the n-fold threshold \(V_{\mathrm{T}}=n\) (given the same noise realization). The probability density of the nth-order interval is thus given by \(\mathcal {P}(t,n)\) (cf. Eq. (54)). The k-th cumulant of the nth-order intervals is given by (van Kampen 1992)

$$ \kappa_{k}(n)=(-1)^{k}\left.\frac{\mathrm{d}^{k}\ln\bar{P}_{n}}{\mathrm{d}s^{k}}\right|_{s=0}, $$
(71)

where

$$ \bar{P}_{n}(s)={\int}_{0}^{\infty}\mathrm{d}t\,e^{-st}\mathcal{P}(t,n) $$
(72)

is the Laplace transform of the n-th-order interval density, which plays the role of a moment generating function. Introducing the Laplace-Fourier transform of the probability density

$$ \varphi(v,\mathbf{k},s)={\int}_{0}^{\infty}\mathrm{d}t\,e^{-st}\int\mathrm{d}^{d}y\,e^{i\mathbf{k}\cdot\mathbf{y}}p (v,\mathbf{y},t), $$
(73)

the moment generating function can be rewritten as

$$ \bar{P}_{n}(s)=\left[\varphi-i\epsilon\mathbf{b}\cdot\nabla_{\mathbf{k}}\varphi\right]_{\mathbf{k}=0, v=n}. $$
(74)

The function φ satisfies the transformed FPE

$$\begin{array}{@{}rcl@{}} \partial_{v}\varphi&-&\mathbf{k}^{\mathrm{T}}\mathrm{A}\nabla_{\mathbf{k}}\varphi=\!-(s+\mathbf{k}^{\mathrm{T}} \mathrm{D}\mathbf{k})\varphi+\delta(v) \exp\left(-\frac{\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}}{2}\right)\\ &+&i\epsilon\mathbf{b}^{\mathrm{T}}\left({\partial_{v}\nabla_{\mathbf{k}}\varphi+\delta(v)\varSigma\mathbf{k}\exp \left(-\frac{\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}}{2}\right)}\right). \end{array} $$
(75)

The mixed derivative \(\partial_{v}\nabla_{\mathbf{k}}\varphi\) makes an exact solution infeasible. However, using the perturbation expansion

$$ \varphi=\varphi^{(0)}+\epsilon\varphi^{(1)}+\epsilon^{2}\varphi^{(2)}+\dotsb, $$
(76)

yields a hierarchy of first-order partial differential equations for the coefficients \(\varphi^{(m)}(v,\mathbf{k},s)\). Specifically, we obtain

$$ \partial_{v}\varphi^{(m)}-\mathbf{k}^{\mathrm{T}}\mathrm{A}\nabla_{\mathbf{k}}\varphi^{(m)}=- ({s+\mathbf{k}^{\mathrm{T}}\mathrm{D}\mathbf{k}})\varphi^{(m)}+I_{m}, $$
(77)

where the inhomogeneities \(I_{m}\) are given by

$$\begin{array}{@{}rcl@{}} I_{0}&=&\delta(v)\exp\left(-\frac{\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}}{2}\right) \end{array} $$
(78)
$$\begin{array}{@{}rcl@{}} I_{1}&=&i\mathbf{b}^{\mathrm{T}}\partial_{v}\nabla_{\mathbf{k}}\varphi^{(0)}+i\delta(v)\mathbf{b}^{\mathrm{T}} \varSigma\mathbf{k}\exp\left(-\frac{\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}}{2}\right) \end{array} $$
(79)
$$\begin{array}{@{}rcl@{}} I_{m}&=&i\mathbf{b}^{\mathrm{T}}\partial_{v}\nabla_{\mathbf{k}}\varphi^{(m-1)},\qquad m\ge 2. \end{array} $$
(80)

If we insert the ansatz (76) into the formula (74), we obtain a perturbation series for the moment generating function:

$$ \bar{P}_{n}(s)=\bar{P}_{n}^{(0)}(s)+\epsilon\bar{P}_{n}^{(1)}(s)+\epsilon^{2}\bar{P}_{n}^{(2)}(s)+{\cdots} , $$
(81)

where \(\bar {P}_{n}^{(0)}(s)=\varphi ^{(0)}(n,0,s)\) and

$$ \bar{P}_{n}^{(m)}(s)=\left[\varphi^{(m)}-i\mathbf{b}^{\mathrm{T}}\nabla_{\mathbf{k}}\varphi^{(m-1)}\right]_{\mathbf{k}=0, v=n} $$
(82)

for m≥1. Equation (77) can again be solved by the method of characteristics. To this end, the solution surface \(\left (v,\mathbf {k},\varphi ^{(m)}(v,\mathbf {k},s)\right )\) is parametrized by a parameter τ and d initial constants \(c_{i}\), \(i=1,\dotsc ,d\). The total derivative of \(\varphi^{(m)}\) with respect to τ is then \(\mathrm {d}\varphi ^{(m)}/\mathrm {d}\tau =\frac {\mathrm {d}v}{\mathrm {d}\tau }\partial _{v}\varphi ^{(m)}+ \frac {\mathrm {d}\mathbf {k}^{\mathrm {T}}}{\mathrm {d}\tau }\nabla _{\mathbf {k}}\varphi ^{(m)}\). Comparison with Eq. (77) yields the characteristic equations

$$\begin{array}{@{}rcl@{}} \frac{\mathrm{d}v}{\mathrm{d}\tau}&=&1, \end{array} $$
(83)
$$\begin{array}{@{}rcl@{}} \frac{\mathrm{d}\mathbf{k}}{\mathrm{d}\tau}&=&-{A}^{\mathrm{T}}\mathbf{k}, \end{array} $$
(84)
$$\begin{array}{@{}rcl@{}} \frac{\mathrm{d}\varphi^{(m)}}{\mathrm{d}\tau}&=&-(s+\mathbf{k}^{\mathrm{T}}\mathrm{D} \mathbf{k})\varphi^{(m)}+I_{m}. \end{array} $$
(85)

The integration of the first two equations results in

$$ v=\tau,\qquad\mathbf{k}=e^{-\tau{A}^{\mathrm{T}}}\mathbf{c}. $$
(86)

To solve the third equation, we make the ansatz

$$ \varphi^{(m)}(\tau,\mathbf{c})=\exp\left(-s\tau-\frac{1}{2}\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k}\right)\psi^{(m)} (\tau,\mathbf{c}). $$
(87)

Using this ansatz in Eq. (85) yields

$$ \frac{\mathrm{d}\psi^{(m)}}{\mathrm{d}\tau}=-\left(\mathbf{k}^{\mathrm{T}}\mathrm{D}\mathbf{k}-\frac{1}{2} \frac{\mathrm{d}}{\mathrm{d}\tau}(\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k})\right)\psi^{(m)}+\hat{I}_{m} (\tau,\mathbf{c}) $$
(88)

where \(\hat {I}_{m}(\tau ,\mathbf {c})=I_{m}\exp \left (s\tau +\frac {1}{2}\mathbf {k}^{\mathrm {T}}\varSigma \mathbf {k}\right )\). In terms of the lower-order functions \(\psi^{(m-1)}\), the inhomogeneous parts can be written as

$$\begin{array}{@{}rcl@{}} \hat{I}_{0}&=&\delta(\tau), \end{array} $$
(89)
$$\begin{array}{@{}rcl@{}} \hat{I}_{1}&=&i\mathbf{b}^{\mathrm{T}}\left[\partial_{v}\nabla_{\mathbf{k}}\psi^{(0)}-\varSigma\mathbf{k} \partial_{v}\psi^{(0)}\vphantom{\left(\nabla_{\mathbf{k}}\psi^{(0)}-\varSigma\mathbf{k}\psi^{(0)}\right)}\right.\\ &&\left.-s\left(\nabla_{\mathbf{k}}\psi^{(0)}-\varSigma\mathbf{k}\psi^{(0)}\right)+\delta(\tau) \varSigma\mathbf{k}\right] \end{array} $$
(90)
$$\begin{array}{@{}rcl@{}} \hat{I}_{m}&=&i\mathbf{b}^{\mathrm{T}}\left[\partial_{v}\nabla_{\mathbf{k}}\psi^{(m-1)}-\varSigma\mathbf{k}\partial_{v} \psi^{(m-1)}\vphantom{\left(\nabla_{\mathbf{k}}\psi^{(m-1)}-\varSigma\mathbf{k}\psi^{(m-1)}\right)}\right.\\ &&\left.-s\left(\nabla_{\mathbf{k}}\psi^{(m-1)}-\varSigma\mathbf{k}\psi^{(m-1)}\right)\right],\quad m\ge 2. \end{array} $$
(91)

In order to regard these expressions as functions of τ and c, the vector k has to be understood as \(\mathbf {k}(\tau ,\mathbf {c})=e^{-\tau \text {A}^{\text {T}}}\mathbf {c}\). It turns out that in Eq. (88) the prefactor in front of \(\psi^{(m)}\) vanishes because, in light of Eqs. (34) and (84),

$$\begin{array}{@{}rcl@{}} \frac{\mathrm{d}}{\mathrm{d}\tau}(\mathbf{k}^{\mathrm{T}}\varSigma\mathbf{k})&=&\dot{\mathbf{k}}^{\text{T}}\varSigma\mathbf{k} +\mathbf{k}^{\mathrm{T}}\varSigma\dot{\mathbf{k}}=-(\mathbf{k}^{\mathrm{T}}\mathrm{A}\varSigma\mathbf{k} +\mathbf{k}^{\mathrm{T}}\varSigma{\mathrm{A}^{\mathrm{T}}\mathbf{k}})\\ &=&-\mathbf{k}^{\mathrm{T}}(\mathrm{A}\varSigma+\varSigma{\mathrm{A}}^{\mathrm{T}})\mathbf{k} =2\mathbf{k}^{\mathrm{T}}\mathrm{D}\mathbf{k}. \end{array} $$
(92)

Thus, ψ (m) can be easily integrated:

$$ \psi^{(m)}(\tau,\mathbf{c})={\int}_{-\infty}^{\tau}\mathrm{d}\tau^{\prime}\,\hat{I}_{m}(\tau^{\prime},\mathbf{c}). $$
(93)

For the evaluation of this formula, the derivatives of the lower-order functions \(\psi^{(m-1)}(\tau,\mathbf{c})\) with respect to v and k are needed. To this end, it is useful to apply the chain rule: for a given scalar function f(τ,c) we can write

$$ {(\nabla_{\mathbf{k}} f)}^{\mathrm{T}}={[\partial_{\tau} f\,\nabla_{\mathbf{k}}\tau+(\nabla_{\mathbf{k}}\mathbf{c})\nabla_{\mathbf{c}} f]}^{\mathrm{T}}={(\nabla_{\mathbf{c}} f)}^{\mathrm{T}}e^{\tau\text{A}^{\text{T}}} $$
(94)

and

$$ \partial_{v} f=\partial_{\tau} f+{(\nabla_{\mathbf{c}} f)}^{\mathrm{T}}{A}^{\mathrm{T}}\mathbf{c}. $$
(95)

In Eq. (94), we made use of \(\nabla_{\mathbf{k}}\tau=0\) and \(\nabla _{\mathbf {k}}\mathbf {c}=\left (e^{\tau \text {A}^{\text {T}}}\right )^{\mathrm {T}}=e^{\tau \mathrm{A}}\), where \(\nabla_{\mathbf{k}}\mathbf{c}\) is a matrix with elements \((\nabla_{\mathbf{k}}\mathbf{c})_{ij}=\partial c_{j}/\partial k_{i}\). In Eq. (95), the relation \(\partial _{v}\mathbf {c}=\partial _{v}e^{v\text {A}^{\text {T}}}\mathbf {k}={A}^{\mathrm {T}}\mathbf {c}\) was employed.

For m≥1, the perturbation coefficient \(\bar {P}_{n}^{(m)}(s)\) can be expressed in terms of \(\psi^{(m-1)}(\tau,\mathbf{c})\) by using Eqs. (82), (93) and (95):

$$\begin{array}{@{}rcl@{}} &&\bar{P}_{n}^{(m)}(s)=e^{-sn}\left[\psi^{(m)}-i\mathbf{b}^{\mathrm{T}}\nabla_{\mathbf{k}}\psi^{(m-1)}\right]_{\mathbf{c}=0, \tau=n},\\ &&=e^{-sn}\left\{{\int}_{-\infty}^{\tau}\mathrm{d}\tau^{\prime}\,i\mathbf{b}^{\mathrm{T}}\left[\partial_{v}\nabla_{\mathbf{k}} \psi^{(m-1)}-s\nabla_{\mathbf{k}}\psi^{(m-1)}\right]\right.\\ &&\quad\left.-i\mathbf{b}^{\mathrm{T}}\nabla_{\mathbf{k}}\psi^{(m-1)}\vphantom{{\int}_{-\infty}^{\tau}\mathrm{d}\tau} \right\}_{\mathbf{c}=0, \tau=n},\\ &&=-ise^{-sn}\left[{\int}_{0}^{n}\mathrm{d}\tau^{\prime}\,\mathbf{b}^{\mathrm{T}}\nabla_{\mathbf{k}}\psi^{(m-1)}(\tau^{\prime}, \mathbf{c})\right]_{\mathbf{c}=0}. \end{array} $$
(96)

For m=0, we obtain from Eq. (93)

$$ \psi^{(0)}(\tau,\mathbf{c})=\theta(\tau). $$
(97)

Equation (87) then yields the leading order of the moment-generating function

$$ \bar{P}_{n}^{(0)}(s)=\varphi^{(0)}(n,0,s)=e^{-sn}. $$
(98)

The derivatives of \(\psi^{(0)}\) simply read

$$ \partial_{v}\psi^{(0)}=\delta(\tau),\qquad \nabla_{\mathbf{k}}\psi^{(0)}=0. $$
(99)

Because the gradient in Eq. (96) equals zero for m=1 (Eq. (99)), the first-order contribution to \(\bar {P}_{n}(s)\) vanishes:

$$ \bar{P}_{n}^{(1)}(s)=0. $$
(100)

From Eqs. (90) and (93) we obtain

$$ \psi^{(1)}(\tau,\mathbf{c})=i\theta(\tau)s{\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}. $$
(101)

Using Eq. (94), the gradient of \(\psi^{(1)}\) reads

$$ {\left(\nabla_{\mathbf{k}} \psi^{(1)}\right)}^{\mathrm{T}}=i\theta(\tau)s{\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{\tau_{1}\text{A}^{\text{T}}}. $$
(102)

Furthermore, the derivative with respect to v is

$$\begin{array}{@{}rcl@{}} \partial_{v}\psi^{(1)}&=&i\theta(\tau)s\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau\text{A}^{\text{T}}}\mathbf{c}\vphantom{{\int}_{0}^{\tau}\mathrm{d}\tau_{1}}\right. \\ &&\left.+{\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}{A}^{\mathrm{T}}\mathbf{c}\right)\\ &=&i\theta(\tau)s\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c} \end{array} $$

and the mixed derivative takes the form

$$ {\left(\nabla_{\mathbf{k}}\partial_{v}\psi^{(1)}\right)}^{\mathrm{T}}=i\theta(\tau)s\mathbf{b}^{\mathrm{T}}\varSigma e^{\tau\text{A}^{\text{T}}}. $$
(103)

For m=2, we find the first non-vanishing correction of the moment-generating function due to noise:

$$ \bar{P}_{n}^{(2)}(s)=s^{2}e^{-sn}{{\int}_{0}^{n}}\mathrm{d}\tau_{2}\,{\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\, C_{\text{in}}{(\tau_{1})}. $$
(104)

Here, \(C_{\text{in}}(t)\) is the noise correlation function as defined in Eq. (38). Furthermore, we obtain

$$\begin{array}{@{}rcl@{}} \psi^{(2)}&=&-\theta(\tau)s{\int}_{0}^{\tau}\mathrm{d}\tau_{2}\,\left\{\vphantom{{\int}_{0}^{\tau}} \mathbf{b}^{\mathrm{T}}\varSigma e^{\tau_{2}\text{A}^{\text{T}}}\mathbf{b}\right.\\ &&- \left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2} \text{A}^{\text{T}}}\mathbf{c}\right)(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})\\ &&-s\left[{\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{\tau_{1}\text{A}^{\text{T}}} \mathbf{b}\right.\\ &&\left.\left.-\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right) {\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right]\right\} \end{array} $$
(105)

and the gradient

$$\begin{array}{@{}rcl@{}} {\left(\nabla_{\mathbf{k}} \psi^{(2)}\right)}^{\mathrm{T}}&\!=&\!-\theta(\tau)s{\int}_{0}^{\tau}\mathrm{d}\tau_{2}\,\left\{\vphantom{{\int}_{0}^{\tau}} -(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})\mathbf{b}^{\mathrm{T}}\varSigma e^{(\tau-\tau_{2})\text{A}^{\text{T}}}\right.\\ &&\!-\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right)\mathbf{b}^{\mathrm{T}} \varSigma e^{\tau\text{A}^{\text{T}}}\\ &&\!+s\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right) {\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{(\tau-\tau_{1})\text{A}^{\text{T}}}\\ &&\left.\!+s\left({\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1} \text{A}^{\text{T}}}\mathbf{c}\right)\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{(\tau-\tau_{2})\text{A}^{\text{T}}}\right)\right\}. \end{array} $$
(106)

The derivative with respect to v is more involved: using Eq. (95) first leads to

$$\begin{array}{@{}rcl@{}} \partial_{v}\psi^{(2)}&\,=\,&-\theta(\tau)s\left\{\vphantom{{\int}_{0}^{\tau}} C_{\text{in}}{(\tau)}-\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau\text{A}^{\text{T}}}\mathbf{c}\right)(\mathbf{b}^{\mathrm{T}} \varSigma\mathbf{c})\right.\\ &&-s\left[{\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,C_{\text{in}}{(\tau-\tau_{1})-}\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau \text{A}^{\text{T}}}\mathbf{c}\right){\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}} \varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right]\\ &&+{\int}_{0}^{\tau}\mathrm{d}\tau_{2}\,\left[\vphantom{{\int}_{0}^{\tau}} -(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}} {A}^{\mathrm{T}}\mathbf{c}\right.\\ &&-\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right)\mathbf{b}^{\mathrm{T}} \varSigma{A}^{\mathrm{T}}\mathbf{c}\\ &&+s\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right) {\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}} {A}^{\mathrm{T}}\mathbf{c}\\ &&\left.\left.+s\left({\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A} ^{\text{T}}}\mathbf{c}\right)(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}{A}^{\mathrm{T}}\mathbf{c}) \right]\right\}. \end{array} $$
(107)

Using the identity \(e^{-\tau \text {A}^{\text {T}}}{A}^{\mathrm {T}}=-\frac {\mathrm{d}}{\mathrm{d}\tau }e^{-\tau \text {A}^{\text {T}}}\) and integrating by parts simplifies the expression to

$$\begin{array}{@{}rcl@{}} \partial_{v}\psi^{(2)}&=&-\theta(\tau)s\left\{\vphantom{{\int}_{0}^{\tau}} \mathbf{b}^{\mathrm{T}}\varSigma e^{\tau\text{A}^{\text{T}}}\mathbf{b}-(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})^{2}\right.\\ &&-(\mathbf{b}^{\mathrm{T}}\varSigma{A}^{\mathrm{T}}\mathbf{c}){\int}_{0}^{\tau}\mathrm{d}\tau_{1}\, \left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right)\\ &&-s\left[{\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{\tau_{1}\text{A}^{\text{T}}}\mathbf{b}\right.\\ &&\left.\left.-(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c}){\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}} \varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right]\right\}. \end{array} $$
(108)

Taking the derivative of the last equation with respect to k results in

$$\begin{array}{@{}rcl@{}} {(\nabla_{\mathbf{k}}\partial_{v}\psi^{(2)})}^{\mathrm{T}}&\,=\,&-\theta(\tau)s\left\{\vphantom{{\int}_{0}^{\tau}} -2(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})\mathbf{b}^{\mathrm{T}}\varSigma e^{\tau\text{A}^{\text{T}}}\right.\\ &&+ \left[{\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right]\mathbf{b}^{\mathrm{T}}\varSigma\left(s\mathbb{I}_{d}-{A}^{\mathrm{T}}\right) e^{\tau\text{A}^{\text{T}}}\\ &&\left.+\left(\mathbf{b}^{\mathrm{T}}\varSigma\left(s\mathbb{I}_{d}-{A}^{\mathrm{T}}\right)\mathbf{c}\right) {\int}_{0}^{\tau}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{\tau_{1}\text{A}^{\text{T}}}\right\}, \end{array} $$
(109)

where \(\mathbb {I}_{d}\) denotes the d×d identity matrix.

Because the gradient \(\nabla_{\mathbf{k}}\psi^{(2)}\) vanishes for \(\mathbf{c}=0\), we find

$$ \bar{P}_{n}^{(3)}(s)=0. $$
(110)

Using the derivatives in Eqs. (106), (108) and (109) we obtain the third-order perturbation solution

$$\begin{array}{@{}rcl@{}} &&{} \psi^{(3)}=-i\theta(\tau)\frac{s}{\mu^{4}}{\int}_{0}^{\tau}\mathrm{d}\tau_{3}\,\left\{\vphantom{{\int}_{0}^{\tau}} -2(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})C_{\text{in}}{(\tau_{3})}\right.\\ &&{} +\left({\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right)\left(\mathbf{b}^{\mathrm{T}}\varSigma\left(2s\mathbb{I}_{d}- {A}^{\mathrm{T}}\right) e^{\tau_{3}\text{A}^{\text{T}}}\mathbf{b}\right)\\ &&{} +\left(\mathbf{b}^{\mathrm{T}}\varSigma\left(2s\mathbb{I}_{d}-{A}^{\mathrm{T}}\right)\mathbf{c}\right)\left({\int}_{0}^{\tau_{3}} \mathrm{d}\tau_{1}\,C_{\text{in}}(\tau_{1})\right)\\ &&{} -\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{3}\text{A}^{\text{T}}}\mathbf{c}\right)\left[ C_{\text{in}}(\tau_{3}) -(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})^{2}\right.\\ &&{} -(\mathbf{b}^{\mathrm{T}}\varSigma{A}^{\mathrm{T}}\mathbf{c}){\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{1}\,\left(\mathbf{b}^{\mathrm{T}} \varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right)\\ &&{} \left. 
-s\left({\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{1}\,C_{\text{in}}(\tau_{1}) -(\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c}){\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right)\right]\\ &&{} -s^{2}{\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\,\left[\vphantom{{\int}_{0}^{\tau}} \left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right){\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1} \,C_{\text{in}}(\tau_{3}-\tau_{1})\right.\\ &&{} \left.+\left({\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right) C_{\text{in}}{(\tau_{3}-\tau_{2})}\right]\\ &&{} +s\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{3}\text{A}^{\text{T}}}\mathbf{c}\right){\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\, \left[\vphantom{{\int}_{0}^{\tau}} C_{\text{in}}{(\tau_{2})}-\left(\mathbf{b}^{\mathrm{T}}\varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right) (\mathbf{b}^{\mathrm{T}}\varSigma\mathbf{c})\right.\\ &&{} \left.\left.-s\left({\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,C_{\text{in}}{(\tau_{1})}-\left(\mathbf{b}^{\mathrm{T}} \varSigma e^{-\tau_{2}\text{A}^{\text{T}}}\mathbf{c}\right){\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\mathbf{b}^{\mathrm{T}} \varSigma e^{-\tau_{1}\text{A}^{\text{T}}}\mathbf{c}\right)\right]\right\}.\\ \end{array} $$
(111)

Differentiating this expression with respect to k, setting c=0, and taking the scalar product with the vector b leads to

$$\begin{array}{@{}rcl@{}} &&{}\left. \left(\nabla_{\mathbf{k}}\psi^{(3)}\right)^{\mathrm{T}}\mathbf{b}\right|_{\mathbf{c}=0}=i\theta(\tau)s{\int}_{0}^{\tau} \mathrm{d}\tau_{3}\,\left\{\vphantom{{\int}_{0}^{\tau_{3}}} 2C_{\text{in}}{(\tau_{3})}C_{\text{in}}{(\tau)}\right.\\ &&{} +C_{\text{in}}{(\tau-\tau_{3})}C_{\text{in}}{(\tau_{3})}\,+\,{\int}_{0}^{\tau_{3}} \mathrm{d}\tau_{1}\,\left[C_{\text{in}}{(\tau\!-\tau_{1})}C_{\text{in}}^{\prime}{(\tau_{3})}\,+\,C_{\text{in}}(\tau_{1})C^{\prime}_{\text{in}}(\tau)\right]\\ &&{} -\!2s{\int}_{0}^{\tau_{3}}\!\mathrm{d}\tau_{1}\,[C_{\text{in}}{(\tau\,-\,\tau_{1})}C_{\text{in}}{(\tau_{3})}\,+\,C_{\text{in}} {(\tau_{1})}C_{\text{in}}{(\tau)}\,+\,C_{\text{in}}{(\tau_{1})}C_{\text{in}}{(\tau\,-\,\tau_{3})}]\\ &&{} +s^{2}{\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\,{\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\left[C_{\text{in}} {(\tau-\tau_{2})}C_{\text{in}}{(\tau_{3}-\tau_{1})}\right.\\ &&{} \left.\left.+C_{\text{in}}{(\tau-\tau_{1})}C_{\text{in}}{(\tau_{3}-\tau_{2})} +C_{\text{in}}{(\tau-\tau_{3})}C_{\text{in}}{(\tau_{2}-\tau_{1})}\right]\vphantom{{\int}_{0}^{\tau_{2}}}\right\} \end{array} $$
(112)

Here, \(C_{\text {in}}^{\prime }(\tau )\) denotes the derivative of the correlation function with respect to τ.

Finally, using Eqs. (96) and (112) yields the second non-vanishing correction:

$$\begin{array}{@{}rcl@{}} &&\bar{P}_{n}^{(4)}(s)=e^{-sn}s^{2}{{\int}_{0}^{n}}\mathrm{d}\tau_{4}\,{\int}_{0}^{\tau_{4}}\mathrm{d}\tau_{3}\,\left\{\vphantom{{\int}_{0}^{\tau_{3}}} 2C_{\text{in}}{(\tau_{3})}C_{\text{in}}{(\tau_{4})}\right.\\ &&+C_{\text{in}}{(\tau_{4}\,-\,\tau_{3})}C_{\text{in}}{(\tau_{3})} \,+\,{\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{1}\,[C_{\text{in}}{(\tau_{4}\,-\,\tau_{1})}C_{\text{in}}^{\prime}(\tau_{3}) \,+\,C_{\text{in}}{(\tau_{1})} C_{\text{in}}^{\prime}(\tau_{4})]\\ &&-2s{\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{1}\,[C_{\text{in}}{(\tau_{4}\,-\,\tau_{1})}C_{\text{in}}{(\tau_{3})}\,+\, C_{\text{in}}{(\tau_{1})}C_{\text{in}}{(\tau_{4})}\,+\,C_{\text{in}}{(\tau_{1})}C_{\text{in}}{(\tau_{4}\,-\,\tau_{3})}]\\ &&+s^{2}{\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\,{\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\left[C_{\text{in}} {(\tau_{4}-\tau_{2})}C_{\text{in}}{(\tau_{3}-\tau_{1})}\right.\\ &&\left.+C_{\text{in}}{(\tau_{4}-\tau_{1})}C_{\text{in}} {(\tau_{3}-\tau_{2})}\left.+C_{\text{in}}{(\tau_{4}-\tau_{3})}C_{\text{in}}{(\tau_{2}-\tau_{1})}\right]\vphantom{{\int}_{0}^{\tau_{2}}}\right\}. \end{array} $$
(113)

To summarize, the moment-generating function reads up to fourth order in 𝜖:

$$ \bar{P}_{n}(s)\,=\,e^{-sn}\left(\!1\,+\,\epsilon^{2}s^{2}{{\int}_{0}^{n}}\mathrm{d}\tau_{2}\,{\int}_{0}^{\tau_{2}}\!\mathrm{d}\tau_{1} \,C_{\text{in}}{(\tau_{1})}\!\right)+\epsilon^{4}\bar{P}_{n}^{(4)}(s). $$
(114)

This formula satisfies the general property \(\bar {P}_{n}(0)=1\) of a moment-generating function, which expresses the normalization \({\int }_{0}^{\infty }\mathrm {d}t\,P_{n}(t)=1\) of the nth-order interval density.

Knowing the moment-generating function, Eq. (114), we can compute the cumulants of the nth-order intervals by means of Eq. (71) up to fourth order in 𝜖. As expected, the first cumulant, or mean nth-order interval, is given by

$$ \kappa_{1,n}\equiv \langle t_{n}\rangle=n, $$
(115)

which is consistent with the fact that \(\langle t_{n}\rangle =\langle T_{1}\rangle +\dotsb +\langle T_{n}\rangle =n\langle T_{i}\rangle =n\) (recall that \(\langle T_{i}\rangle=1\) in the non-dimensionalized system).
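As a quick numerical illustration (our own sketch, not part of the derivation), the first two cumulants can be recovered by finite-difference differentiation of \(\ln \bar P_{n}(s)\) from Eq. (114), here truncated at order 𝜖² and evaluated for the illustrative choice C_in(τ) = e^{−|τ|}, for which h(n) = n − 1 + e^{−n}:

```python
import numpy as np

# Cumulants from Eq. (71) via finite differences of ln(P_n(s)), using the
# moment-generating function (114) truncated at order eps^2, for the
# illustrative choice C_in(tau) = exp(-|tau|), i.e. h(n) = n - 1 + exp(-n).
eps, n = 0.1, 3.0
h_n = n - 1.0 + np.exp(-n)
lnP = lambda s: -s * n + np.log(1.0 + eps**2 * s**2 * h_n)

ds = 1e-3
k1 = -(lnP(ds) - lnP(-ds)) / (2.0 * ds)             # first cumulant
k2 = (lnP(ds) - 2.0 * lnP(0.0) + lnP(-ds)) / ds**2  # second cumulant
print(k1)  # mean n-th order interval, Eq. (115): n
print(k2)  # leading-order variance, Eq. (116): 2*eps^2*h(n)
```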

The second cumulant is equal to the variance \(\langle {\Delta } {t_{n}^{2}}\rangle \) of the nth-order intervals, where \({\Delta } t_{n}=t_{n}-\langle t_{n}\rangle \). From the formula for the cumulants, Eq. (71), it can be seen that only the coefficients of the quadratic terms \(s^{2}\) in Eqs. (113) and (114) enter the expression for the second cumulant (\(\bar {P}_{n}(s)\) has no linear terms). Thus, we find

$$\begin{array}{@{}rcl@{}} \kappa_{2,n}\!\!&\equiv\!&\langle{\Delta} {t_{n}^{2}}\rangle=2\epsilon^{2}{{\int}_{0}^{n}}\mathrm{d}\tau_{2}\,{\int}_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\, C_{\text{in}}{(\tau_{1})}\\ &&+2\epsilon^{4}{{\int}_{0}^{n}}\mathrm{d}\tau_{4}\,{\int}_{0}^{\tau_{4}} \mathrm{d}\tau_{3}\,\left\{ \vphantom{{\int}_{0}^{\tau_{3}}}2C_{\text{in}}{(\tau_{3})}C_{\text{in}}{(\tau_{4})}\right.\\ &&+ C_{\text{in}}{(\tau_{4}\,-\,\tau_{3})} C_{\text{in}}{(\tau_{3})}\\ &&\left.+{\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{1}\,\left[C_{\text{in}}{(\tau_{4}\,-\,\tau_{1})}C_{\text{in}}^{\prime}(\tau_{3})\!+C_{\text{in}} {(\tau_{1})}C_{\text{in}}^{\prime}(\tau_{4})\right]\right\}\\ &=&2\epsilon^{2} h(n)+2\epsilon^{4}\left[ g^{2}(n)+C_{\text{in}}{(n)} h(n)\right]. \end{array} $$
(116)

The third cumulant only involves the cubic terms in Eq. (113), hence

$$\begin{array}{@{}rcl@{}} \kappa_{3,n}&\!\equiv\!&\langle{\Delta} {t_{n}^{3}}\rangle\,=\,12\epsilon^{4}{{\int}_{0}^{n}}\mathrm{d}\tau_{3}{\int}_{0}^{\tau_{3}}\mathrm{d}\tau_{2}{\int}_{0}^{\tau_{2}} \mathrm{d}\tau_{1}\left[C_{\text{in}}{(\tau_{3}-\tau_{1})}C_{\text{in}}{(\tau_{2})}\right.\\ &&\left.+C_{\text{in}}{(\tau_{1})} C_{\text{in}}{(\tau_{3})}+C_{\text{in}}{(\tau_{1})}C_{\text{in}}{(\tau_{3}-\tau_{2})}\right]\\ &=&12\epsilon^{4} g(n) h(n). \end{array} $$
(117)
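The leading-order variance \(\kappa_{2,n}\approx 2\epsilon^{2} h(n)\) can also be checked against a direct Monte-Carlo simulation of the first-passage problem. The sketch below (an illustration under assumed parameters, not the simulation scheme used for the figures) integrates the free process \(\dot V=1+\epsilon\eta\) with η an Ornstein-Uhlenbeck process of unit variance and unit correlation time, so that C_in(τ) = e^{−|τ|} and h(n) = n − 1 + e^{−n}:

```python
import numpy as np

# Monte-Carlo check of kappa_{2,n} ~ 2*eps^2*h(n), Eq. (116), for the free
# process dV/dt = 1 + eps*eta with eta an OUP of unit variance and unit
# correlation time (illustrative parameters, not those of the figures).
rng = np.random.default_rng(1)
eps, n, dt, trials = 0.1, 5.0, 1e-3, 4000

a = np.exp(-dt)                      # exact one-step OU update coefficients
s = np.sqrt(1.0 - a * a)
eta = rng.standard_normal(trials)    # stationary initial condition
V = np.zeros(trials)
t_n = np.zeros(trials)               # n-th threshold-crossing times
k = 0
while (t_n == 0.0).any():
    k += 1
    V += dt * (1.0 + eps * eta)
    eta = a * eta + s * rng.standard_normal(trials)
    t_n[(t_n == 0.0) & (V >= n)] = k * dt

h_n = n - 1.0 + np.exp(-n)           # h(n) for C_in(tau) = exp(-|tau|)
print(t_n.mean())                    # close to n, cf. Eq. (115)
print(np.var(t_n), 2 * eps**2 * h_n) # should agree to leading order
```

With 𝜖 = 0.1 the fourth-order corrections in Eq. (116) are below a percent, so the empirical variance should match \(2\epsilon^{2}h(n)\) within sampling error.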

1.2.6 Coefficient of variation and serial correlation coefficient

The coefficient of variation is given by the ratio of the standard deviation to the mean of the first-order interval (i.e. the ISI). If we take into account that we measure time in units of the mean ISI, we find

$$ C_{\text{V}}=\frac{\sqrt{\langle {\Delta} {t_{1}^{2}}\rangle}}{\langle t_{1}\rangle}=\sqrt{\kappa_{2,1}}, $$
(118)

where \(\kappa_{2,1}\) is given by Eq. (116) with n=1. Furthermore, we can compute the SCC by using formula (116) for arbitrary n:

$$ \rho_{n}=\frac{\langle{\Delta} t_{n+1}^{2}\rangle-2\langle {\Delta} {t_{n}^{2}}\rangle+\langle{\Delta} t_{n-1}^{2}\rangle}{2\langle{\Delta} {t_{1}^{2}}\rangle}. $$
(119)

If we further restrict ourselves to the leading order, we can derive a much simpler expression for the SCC in terms of the noise correlation function. In fact, writing \(\langle {\Delta } {t_{n}^{2}}\rangle =2\epsilon ^{2}{{\int }_{0}^{n}}\mathrm {d}\tau _{2}\,{\int }_{0}^{\tau _{2}}\mathrm {d}\tau _{1}\,C_{\text {in}}(\tau _{1})+\mathcal {O}(\epsilon ^{4})\) and accounting for the symmetry of \(C_{\text{in}}(\tau)\), we obtain from Eq. (119)

$$ \rho_{n}=\frac{{{\int}_{0}^{1}}\mathrm{d}\tau\,{{\int}_{0}^{1}}\mathrm{d}\tau^{\prime}\,C_{\text{in}}(n+\tau-\tau^{\prime})}{{{\int}_{0}^{1}}\mathrm{d}\tau\,{{\int}_{0}^{1}}\mathrm{d}\tau^{\prime}\,C_{\text{in}}(\tau-\tau^{\prime})}+\mathcal{O}(\epsilon^{2}). $$
(120)

In terms of the noise power spectrum, Eq. (70), this expression can be rewritten as

$$ \rho_{n}=\frac{\int\mathrm{d}f\,S_{\text{in}}(f)\text{sinc}^{2}(f)e^{-2\pi if n}}{\int\mathrm{d}f\,S_{\text{in}}(f)\text{sinc}^{2}(f)}+\mathcal{O}(\epsilon^{2}). $$
(121)
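The leading-order SCC of Eq. (120) is easy to evaluate on a grid. For the illustrative exponential correlation function C_in(τ) = e^{−|τ|}, the double integrals can also be done by hand, giving \(\rho_{n}=(e/2)(1-e^{-1})^{2}e^{-(n-1)}\) (our own evaluation for this special case, not a formula from the text), which the midpoint-rule sketch below reproduces:

```python
import numpy as np

# Midpoint-rule evaluation of the leading-order SCC, Eq. (120), for the
# illustrative correlation function C_in(tau) = exp(-|tau|).
def rho_leading(n, C, m=1000):
    tau = (np.arange(m) + 0.5) / m        # midpoints on [0, 1]
    d = tau[:, None] - tau[None, :]       # tau - tau'
    return C(n + d).mean() / C(d).mean()  # ratio of the two double integrals

C = lambda x: np.exp(-np.abs(x))
approx = [rho_leading(n, C) for n in (1, 2, 3)]
# Closed form for this C_in (our own hand evaluation of the integrals):
exact = [np.exp(-(n - 1)) * (1.0 - np.exp(-1.0))**2 * np.e / 2.0
         for n in (1, 2, 3)]
print(approx)
print(exact)
```

Note that positively correlated input noise yields positive interval correlations that decay with the lag n at the correlation rate of the noise.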

To elucidate the general structure of serial correlations, it is instructive to express the SCC in terms of the drift matrix A and the covariance matrix Σ of the OUP. To leading order in 𝜖, the variance of the nth-order intervals reads

$$\langle{\Delta} {t_{n}^{2}}\rangle=2\epsilon^{2}\left(\mathbf{b}^{\text{T}}\varSigma(\text{A}^{\text{T}})^{-2}\left[e^{n\text{A}^{\text{T}}} -n\text{A}^{\text{T}}-\mathbb{I}_{d}\right]\mathbf{b}\right)+\mathcal{O}(\epsilon^{4}), $$

where we assume that the drift matrix A is invertible. Setting n=1, we obtain the squared coefficient of variation:

$$ C_{\text{V}}^{2}=2\epsilon^{2}\mathbf{b}^{\text{T}}\varSigma(\text{A}^{\text{T}})^{-2}\left[e^{\text{A}^{\text{T}}} -\text{A}^{\text{T}}-\mathbb{I}_{d}\right]\mathbf{b}+ \mathcal{O}(\epsilon^{4}). $$
(122)

Using Eq. (119), the leading order of the SCC can now be written as

$$ \rho_{n}=\frac{\mathbf{b}^{\text{T}}\varSigma(\text{A}^{\text{T}})^{-2} \left(\mathbb{I}_{d}-e^{\text{A}^{\text{T}}}\right)^{2}e^{(n-1)\text{A}^{\text{T}}}\mathbf{b}}{{2}\mathbf{b}^{\text{T}} \varSigma(\text{A}^{\text{T}})^{-2} \left[e^{\text{A}^{\text{T}}} -\text{A}^{\text{T}}-\mathbb{I}_{d}\right]\mathbf{b}}+\mathcal{O}(\epsilon^{2}). $$
(123)

Apparently, the SCC has the simple exponential structure \(\rho _{n}=\mathbf {c}_{1}^{\mathrm {T}} e^{(n-1)\text {A}^{\text {T}}}\mathbf {c}_{2}\) with constant vectors \(\mathbf{c}_{1}\) and \(\mathbf{c}_{2}\). That means that the SCC is a linear combination of the entries of the matrix exponential \(e^{(n-1)\text {A}^{\text {T}}}\). In fact, let Q be the constant matrix made up of the (generalized) eigenvectors of \(\mathrm{A}^{\mathrm{T}}\). Then, \(\mathrm{A}^{\mathrm{T}}\) can be diagonalized (or, more generally, transformed into its Jordan normal form) through the representation \(\mathrm{A}^{\mathrm{T}}=\mathrm{Q}\mathrm{J}\mathrm{Q}^{-1}\). Thus, the SCC can be rewritten as

$$ \rho_{n}=\mathbf{c}_{1}^{\mathrm{T}}\mathrm{Q}e^{(n-1)\mathrm{J}}\mathrm{Q}^{-1}\mathbf{c}_{2}=\mathbf{d}_{1}^{\mathrm{T}} e^{(n-1)\mathrm{J}}\mathbf{d}_{2}, $$
(124)

where the vectors \(\mathbf{d}_{1}=\mathrm{Q}^{\mathrm{T}}\mathbf{c}_{1}\) and \(\mathbf {d}_{2}=\mathrm {Q}^{-1}\mathbf {c}_{2}\) are constant. In the special case where \(\mathrm{A}^{\mathrm{T}}\) is diagonalizable (i.e. \(\mathrm{A}^{\mathrm{T}}\) possesses d linearly independent eigenvectors corresponding to the eigenvalues \(\lambda_{i}\), \(i=1,\dotsc ,d\)), the matrix exponential \(e^{(n-1)\mathrm{J}}\) is diagonal with the diagonal elements \(e^{\lambda _{i}(n-1)}\). Thus, the SCC is simply a superposition of the exponentials \(e^{\lambda _{i}(n-1)}\). Real eigenvalues contribute exponential decays with characteristic “time constant” \(1/|\lambda_{i}|\). On the other hand, complex eigenvalues lead to damped oscillations, which, however, might appear rather irregular as \(e^{\lambda _{i}(n-1)}\) is sampled at discrete integer values (Bauermeister et al. 2013). In any case, the SCC decays to zero for \(n\rightarrow \infty \) because, by assumption, the real parts of the eigenvalues of the drift matrix are negative.
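Equation (123) is straightforward to implement for a general stable drift matrix. In the sketch below (our own illustration), the two-dimensional A, Σ and b are assumed demo values, chosen so that Σ = I solves the Lyapunov equation (34) with a positive-definite D and so that A has complex eigenvalues −0.2 ± i, yielding a damped oscillatory SCC; the truncated sum of ρ_n is compared with the closed-form summed SCC of Eq. (125):

```python
import numpy as np
from scipy.linalg import expm

# Leading-order SCC, Eq. (123), for a general stable drift matrix.
def scc(n, A, Sigma, b):
    At = A.T
    I = np.eye(len(A))
    M = np.linalg.matrix_power(np.linalg.inv(At), 2)  # (A^T)^(-2)
    E = expm(At)
    num = b @ Sigma @ M @ (I - E) @ (I - E) @ expm((n - 1.0) * At) @ b
    den = 2.0 * b @ Sigma @ M @ (E - At - I) @ b
    return num / den

# Illustrative 2-d OUP: Sigma = I solves the Lyapunov equation (34) with
# D = 0.2*I, and A has complex eigenvalues -0.2 +/- i (damped oscillations).
A = np.array([[-0.2, 1.0], [-1.0, -0.2]])
Sigma = np.eye(2)
b = np.array([1.0, 0.0])
rho = [scc(n, A, Sigma, b) for n in range(1, 200)]

# Summed SCC, Eq. (125), compared with the truncated sum.
M = np.linalg.matrix_power(np.linalg.inv(A.T), 2)
s_num = b @ Sigma @ M @ (np.eye(2) - expm(A.T)) @ b
s_den = 2.0 * b @ Sigma @ M @ (expm(A.T) - A.T - np.eye(2)) @ b
print(sum(rho), s_num / s_den)  # should agree
```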

1.2.7 Cumulative correlations and Fano factor

The exponential structure of the SCC, Eq. (123), also allows us to calculate the cumulative effect of serial correlation as expressed by the sum of all SCCs. Applying the formula for the geometric series, \({\sum }_{n=1}^{\infty } e^{(n-1)\text {A}^{\text {T}}}=\left (\mathbb {I}_{d}-e^{\text {A}^{\text {T}}}\right )^{-1}\), we find

$$ {\sum}_{n=1}^{\infty}\rho_{n}=\frac{ \mathbf{b}^{\text{T}}\varSigma(\text{A}^{\text{T}})^{-2} \left(\mathbb{I}_{d}-e^{\text{A}^{\text{T}}}\right)\mathbf{b}} {2\mathbf{b}^{\text{T}} \varSigma(\text{A}^{\text{T}})^{-2} \left[e^{\text{A}^{\text{T}}} -\text{A}^{\text{T}}-\mathbb{I}_{d}\right]\mathbf{b}}+\mathcal{O}(\epsilon^{2}). $$
(125)

This sum also determines the long-term variability of the spike train. Indeed, the fundamental relation (Cox and Lewis 1966a)

$$ \lim_{t\rightarrow\infty}F(t)=\langle T\rangle\lim_{f\rightarrow 0} S_{\text{out}}(f)=C_{\text{V}}^{2}\left(1+2{\sum}_{n=1}^{\infty}\rho_{n}\right) $$
(126)

relates the Fano factor F(t) of the spike count or the power spectrum \(S_{\text{out}}(f)\) of the output spike train to the second-order ISI statistics as expressed by the CV and the sum of SCCs. The Fano factor is defined as the variance-to-mean ratio of the spike count N(t) in the time window (0,t), i.e. \(F(t)=\langle {\Delta } N(t)^{2}\rangle /\langle N(t)\rangle \) (note that, in this definition, the origin of the time window is arbitrary; it does not have to coincide with a spike time). The power spectrum of the output spike train \(x_{\text {out}}(t)={\sum }_{i}\delta (t-t_{i})\) is defined as the Fourier transform of the spike-train auto-correlation function, i.e. \(S_{\text {out}}(f)=\int \mathrm {d}\tau \,e^{2\pi if\tau }\langle x_{\text {out}}(t)x_{\text {out}}(t+\tau )\rangle \). Using the expression for the squared CV, Eq. (122), and the summed SCC, Eq. (125), the long-time limit of the Fano factor is

$$ \lim_{t\rightarrow\infty} F(t)=-2\epsilon^{2}\mathbf{b}^{\text{T}}\varSigma(\mathrm{A}^{\mathrm{T}})^{-1}\mathbf{b}+\mathcal{O}(\epsilon^{4}). $$
(127)

In order to obtain an approximation for the Fano factor for an arbitrary length of the time window t, we follow Middleton et al. (2003): if t is large and 𝜖≪1, the spike count N(t) can be approximated by the free membrane potential V(t) with V(0)∈[0,1). Indeed, for weak noise we can assume \(\dot V(t)>0\), so that the n-th spike time \(t_{n}\) is equivalent to the crossing of the n-fold threshold, i.e. \(V(t_{n})=n\). This implies that the deviation \(\delta V(t)=V(t)-N(t)\) is bounded (in fact, \(0\le \delta V(t)<1\)) and hence \(N(t)=V(t)+\mathcal {O}(1)\). This relation still holds if we assume the specific initial condition V(0)=0. The trajectory of V(t) can then be obtained by integrating Eq. (37a):

$$ V(t)=t+{\epsilon{\int}_{0}^{t}}\mathrm{d}t^{\prime}\,\eta(t^{\prime}). $$
(128)

Thus, the mean and variance are

$$\begin{array}{@{}rcl@{}} \langle V(t)\rangle&=&t,\\ \langle{\Delta} V(t)^{2}\rangle&=&\epsilon^{2}{{\int}_{0}^{t}}\mathrm{d}t^{\prime}{{\int}_{0}^{t}}\mathrm{d} t^{\prime\prime}\,C_{\text{in}}({t^{\prime}-t^{\prime\prime}})=2\epsilon^{2} h(t), \end{array} $$

where h(t) has been defined in Eq. (66). For large times, the Fano factor is hence given by

$$\begin{array}{@{}rcl@{}} F_{\text{large}}(t)&=&\frac{\langle{\Delta} V(t)^{2}\rangle}{\langle V(t)\rangle}+\mathcal{O}(t^{-1})\approx 2\epsilon^{2}\frac{ h(t)}{t}\\ &=&2\epsilon^{2}\mathbf{b}^{\text{T}}\varSigma\left(\frac{(\mathrm{A}^{\mathrm{T}})^{-2}\left[e^{t\text{A}^{\text{T}}} \,-\,t\text{A}^{\text{T}}\,-\,\mathbb{I}_{d}\right]}{t}\right)\mathbf{b}. \end{array} $$
(129)

In the exact limit \(t\rightarrow \infty \) we re-obtain the leading-order term of Eq. (127), which shows the consistency of the two approaches.

For small time windows, the spike train can be regarded as periodic given the weakness of the noise. This argument holds irrespective of the specific correlation function of the noise, which allows us to apply the same approximation as in (Middleton et al. 2003):

$$ F_{\text{small}}(t)=\frac{\{t\}(1-\{t\})}{t}, $$
(130)

where {t}=t−⌊t⌋ denotes the difference between t and the largest integer ⌊t⌋ not exceeding t (sawtooth function). Following Middleton et al. (2003), a uniform approximation over the whole range of t can be obtained by adding the small- and large-time approximations of the Fano factor:

$$ F(t)\approx F_{\text{small}}(t)+F_{\text{large}}(t). $$
(131)

A.3 Model of the synaptic input current

To test our theory (Figs. 3–9), we assume that the pool of presynaptic neurons can be split up into \(N_{\mathcal {E}}\) excitatory and \(N_{\mathcal {I}}=N-N_{\mathcal {E}}\) inhibitory neurons. Furthermore, the postsynaptic currents induced by each spike are modeled by the exponential impulse response functions \(j_{\mathcal {E}}(t)=J_{\mathcal {E}} e^{-t/\tau _{\mathcal {E}}}\theta (t)\) and \(j_{\mathcal {I}}(t)=J_{\mathcal {I}} e^{-t/\tau _{\mathcal {I}}}\theta (t)\) with synaptic time constants \(\tau _{\mathcal {E}} \) and \(\tau _{\mathcal {I}}\) (synaptic filtering). This corresponds to the simple first-order kinetics

$$ \tau_{i}\dot I_{i}=-I_{i}+J_{i}\tau_{i}{\sum}_{k=1}^{N_{i}}x_{i,k}(t),\qquad i=\mathcal{E},\mathcal{I} $$
(132)

where \(x_{\mathcal {E}/\mathcal {I},k}(t)={\sum }_{j}\delta \left (t-t_{j}^{(\mathcal {E}/\mathcal {I},k)}\right )\) is the spike train arriving at the k-th excitatory/inhibitory synapse. \(J_{\mathcal {E}}\) and \(J_{\mathcal {I}}\) denote the synaptic efficacies. The mean synaptic current is thus \( \langle I_{\text {syn}}(t)\rangle =N_{\mathcal {E}}J_{\mathcal {E}}\tau _{\mathcal {E}}\nu _{\mathcal {E}}+N_{\mathcal {I}}J_{\mathcal {I}}\tau _{\mathcal {I}}\nu _{\mathcal {I}}\), where we assumed that excitatory and inhibitory neurons fire with rate \(\nu _{\mathcal {E}}\) and \(\nu _{\mathcal {I}}\), respectively.

In the frequency domain, the synaptic filter becomes

$$ |\tilde{j}_{i}(f)|^{2}=\frac{{J_{i}^{2}}{\tau_{i}^{2}}}{1+(2\pi\tau_{i}f)^{2}} $$
(133)

for \(i=\mathcal {E},\mathcal {I}\). In the special case where the individual presynaptic spike trains are independent and have power spectra \(S_{\mathcal {E}}(f)\) and \(S_{\mathcal {I}}(f)\), respectively, we find the power spectrum of the synaptic current from Eq. (27):

$$ S_{I}(f)=N_{\mathcal{E}}|\tilde{j}_{\mathcal{E}}(f)|^{2}S_{\mathcal{E}}(f)+N_{\mathcal{I}}| \tilde{j}_{\mathcal{I}}(f)|^{2}S_{\mathcal{I}}(f). $$
(134)
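For Poissonian presynaptic spike trains, the single-train spectra are flat and equal to the firing rates, so Eqs. (133) and (134) can be combined directly. A minimal sketch with illustrative, assumed parameter values:

```python
import numpy as np

def lorentzian_filter(f, J, tau):
    """Squared synaptic filter |j~(f)|^2 for an exponential kernel, Eq. (133)."""
    return (J * tau)**2 / (1.0 + (2.0 * np.pi * tau * f)**2)

def current_spectrum(f, NE=800, NI=200, JE=0.1, JI=-0.4,
                     tauE=2.0, tauI=1.0, SE=10.0, SI=10.0):
    """S_I(f) for independent presynaptic trains, Eq. (134).
    For Poisson inputs the single-train spectra SE, SI are flat and
    equal to the rates; all parameter values here are assumptions."""
    return (NE * lorentzian_filter(f, JE, tauE) * SE
            + NI * lorentzian_filter(f, JI, tauI) * SI)
```

At f = 0 this reduces to the sum of the terms \(N_k J_k^2 \tau_k^2 \nu_k\), and the Lorentzian filters suppress power above the synaptic cut-off frequencies \(1/(2\pi\tau_k)\).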

A.4 Signal encoded in a common rate modulation of Poissonian inputs

To model the encoding of a continuous signal s(t), we assume that s(t) represents the rate modulation of an inhomogeneous Poisson process. Specifically, we assume that the k-th presynaptic spike train is a Poisson process with rate

$$r_{k}(t)\equiv\langle x_{k}(t)\rangle=\nu_{k}(1+\varepsilon_{k} s(t)). $$

Here, s(t) is common to all inputs and is modeled as a zero-mean Gaussian process with power spectrum \(S_{s}(f)\). For the correlation matrix, Eq. (25), we find

$$S_{kl}(f)=\nu_{k}\delta_{kl}+\varepsilon_{k}\varepsilon_{l}\nu_{k}\nu_{l}S_{s}(f). $$

The power spectrum \(S_{I}(f)\) of the total input current is then given by Eq. (27), and the normalized spectrum is \(S_{\text {in}}(f)=S_{I}(f)/\int \mathrm {d}f\,S_{I}(f)\). Specifically, we find for homogeneous excitatory and inhibitory populations, where only the excitatory population carries a signal (\(\varepsilon _{\mathcal {E}}=\varepsilon >0\), \(\varepsilon _{\mathcal {I}}=0\)):

$$\begin{array}{@{}rcl@{}} \langle I_{\text{syn}}\rangle&=&N_{\mathcal{E}}J_{\mathcal{E}}\tau_{\mathcal{E}}\nu_{\mathcal{E}}+N_{\mathcal{I}} J_{\mathcal{I}}\tau_{\mathcal{I}}\nu_{\mathcal{I}}\\ S_{I}(f)&=&N_{\mathcal{E}}\nu_{\mathcal{E}}|\tilde j_{\mathcal{E}}(f)|^{2}+N_{\mathcal{I}}\nu_{\mathcal{I}}|\tilde j_{\mathcal{I}}(f)|^{2}\\ &&+\varepsilon^{2}\nu_{\mathcal{E}}^{2} N_{\mathcal{E}}^{2}|\tilde j_{\mathcal{E}}(f)|^{2}S_{s}(f). \end{array} $$
(135)

A signal spectrum with a power-law decay can be modeled by Eq. (15) (cf. Sobie et al. 2011), where \(\mathcal {N}=\frac {1}{2}(1-\gamma )/\left (f_{2}^{1-\gamma }-\gamma f_{1}^{1-\gamma }\right )\) is a normalization constant ensuring unit variance, i.e. \(\int \mathrm {d}f\,S_{s}(f)=1\). The steepness of the decay is quantified by the exponent γ. For γ=0, we recover a flat spectrum, i.e. white noise with a cut-off frequency \(f_{2}\). For γ=1, we obtain 1/f-noise (flicker noise), which possesses strongly increased power at low frequencies. For the sake of the following calculations, we restrict γ to the range [0,1). Furthermore, we assume the lower cut-off to be small, \(f_{1}\ll 1\). Note that in the limit \(f_{1}\rightarrow 0\), we have to require γ<1 in order to have a signal with finite variance. Taking this limit allows us to analytically derive the statistical properties of the spike train at long times t in the range \(1\ll t\ll f_{1}^{-1}\). Nevertheless, in simulations we keep a small lower cut-off \(f_{1}>0\) in order to resolve the low-frequency components.
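The unit-variance normalization can be checked numerically. The sketch below assumes a specific piecewise form for \(S_{s}(f)\) (flat below \(f_{1}\), proportional to \(f^{-\gamma}\) between \(f_{1}\) and \(f_{2}\), zero above \(f_{2}\)); Eq. (15) is not reproduced in this appendix, but this assumed form is consistent with the stated normalization constant \(\mathcal{N}\).

```python
import numpy as np

def powerlaw_spectrum(f, gamma=0.5, f1=1e-3, f2=10.0):
    """Assumed unit-variance power-law signal spectrum (cf. Eq. 15):
    flat below f1, ~ f^-gamma for f1 < |f| < f2, zero above f2."""
    norm = 0.5 * (1.0 - gamma) / (f2**(1.0 - gamma) - gamma * f1**(1.0 - gamma))
    af = abs(f)
    if af > f2:
        return 0.0
    return norm * max(af, f1)**(-gamma)

# two-sided trapezoid check of the unit-variance normalization
fs = np.linspace(0.0, 10.0, 200001)
vals = np.array([powerlaw_spectrum(f) for f in fs])
variance = 2.0 * np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(fs))
```

The computed `variance` is close to one, confirming the normalization for these (arbitrary) parameter values.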

We are interested in the asymptotic behaviors

$$\begin{array}{@{}rcl@{}} F(t)&=&\epsilon^{2}t\int\mathrm{d}f\,S_{\text{in}}(f)\text{sinc}^{2}(ft) \end{array} $$
(136)
$$\begin{array}{@{}rcl@{}} \rho_{n}&\propto&\int\mathrm{d}f\,S_{\text{in}}(f)\text{sinc}^{2}(f)e^{-2\pi ifn} \end{array} $$
(137)

for \(1\ll t\ll f_{1}^{-1}\) and \(1\ll n\ll f_{1}^{-1}\), respectively. We will exploit the property that sinc²(x) is negligibly small for x>1. Thus, in Eq. (136) only frequencies f<1 contribute to the integral. Similarly, we can exploit the fact that for large n, the term \(e^{-2\pi ifn}\) oscillates rapidly for f>1. Thus, only small frequencies f≪1 contribute to the integral in Eq. (137). This allows us to make the following approximations: First, if \(f_{2}>1\), we can replace the upper cut-off frequency by infinity, i.e. \(f_{2}\rightarrow \infty \), without changing the integrals in Eqs. (136) and (137). Second, if we assume a time-scale separation between spiking and fast synaptic filtering dynamics, \(\tau _{\mathcal {E}},\tau _{\mathcal {I}}\ll 1\), we can replace the synaptic filter by

$$|\tilde j_{k}(f)|^{2}\equiv\frac{{J_{k}^{2}}{\tau_{k}^{2}}}{1+(2\pi\tau_{k}f)^{2}}\approx {J_{k}^{2}}{\tau_{k}^{2}},\qquad k=\mathcal{E},\mathcal{I},\quad f\!<\!1 $$

because only frequencies f<1 matter in the above integrals. From Eq. (135) it then follows that the normalized spectrum has the form

$$ S_{\text{in}}(f)=\frac{\sigma_{s}^{2}S_{s}(f)+S_{0}}{\sigma_{I}^{2}} $$
(138)

with \(\sigma _{s}=\varepsilon _{\mathcal {E}}J_{\mathcal {E}}\tau _{\mathcal {E}}N_{\mathcal {E}}\nu _{\mathcal {E}}\) and \(S_{0}={\sum }_{k=\mathcal {E},\mathcal {I}}{J_{k}^{2}}{\tau _{k}^{2}}N_{k}\nu _{k}\). Using Eqs. (136) and (138), we find for the Fano factor

$$\begin{array}{@{}rcl@{}} &&{} F(t)=\frac{2\epsilon^{2}{\sigma_{s}^{2}}t}{{\sigma_{I}^{2}}}{\int}_{0}^{\infty} S_{s}(f)\text{sinc}^{2}(ft)\,\mathrm{d}f+\epsilon^{2}\frac{S_{0}}{{\sigma_{I}^{2}}}\\ &&{}\approx \frac{2\epsilon^{2}{\sigma_{s}^{2}}t\mathcal{N}}{{\sigma_{I}^{2}}}\left(\int\limits_{f_{1}}^{\infty} \frac{\text{sinc}^{2}(ft)}{f^{\gamma}}\,\mathrm{d}f+\frac{{\int}_{0}^{f_{1}}\text{sinc}^{2}(ft)\,\mathrm{d}f}{f_{1}^{\gamma}}\right)+\epsilon^{2}\frac{S_{0}}{{\sigma_{I}^{2}}}\\ &&{} \simeq \frac{2\epsilon^{2}{\sigma_{s}^{2}}\mathcal{N}}{{\sigma_{I}^{2}}}t^{\gamma}{\int}_{f_{1} t}^{\infty} x^{-\gamma}\text{sinc}^{2}(x)\,\mathrm{d}x+\mathcal{O}(1),\qquad t\gg 1. \end{array} $$

If the lower cut-off frequency \(f_{1}\) is still small enough such that \(f_{1} t\ll 1\), we can set the lower integration boundary to zero. This yields

$$ F(t)\simeq\frac{2\epsilon^{2}{\sigma_{s}^{2}}\mathcal{N}\mathcal{C}_{1}}{{\sigma_{I}^{2}}}t^{\gamma}+\mathcal{O}(1),\qquad 1\ll t\ll 1/f_{1}, $$
(139)

where \(\mathcal {C}_{1}=2(2\pi )^{\gamma -1}{\Gamma }(-1-\gamma )\sin \left (\frac {\pi \gamma }{2}\right )\).
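The constant \(\mathcal {C}_{1}\) equals the integral \({\int }_{0}^{\infty } x^{-\gamma }\text {sinc}^{2}(x)\,\mathrm {d}x\) left over after the substitution above. This can be verified numerically; the check below is an addition, not part of the original derivation, and `np.sinc` is the normalized sinc sin(πx)/(πx) used in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def C1(g):
    """C_1 = 2 (2 pi)^(g-1) Gamma(-1-g) sin(pi g / 2), cf. Eq. (139)."""
    return (2.0 * (2.0 * np.pi)**(g - 1.0)
            * Gamma(-1.0 - g) * np.sin(np.pi * g / 2.0))

def C1_numeric(g, upper=200.0):
    """Direct quadrature of int_0^inf x^(-g) sinc^2(x) dx (tail truncated)."""
    val, _ = quad(lambda x: x**(-g) * np.sinc(x)**2, 0.0, upper, limit=2000)
    return val
```

For γ = 1/2, both expressions evaluate to 4/3.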

To get the behavior of the SCC for large lags n, we can again set \(f_{1}\) to zero. Ignoring prefactors, we find

$$\begin{array}{@{}rcl@{}} \rho_{n}&\propto&{\int}_{0}^{\infty}\mathrm{d}f\,\left[{\sigma_{s}^{2}}S_{s}(f)+S_{0}\right]\text{sinc}^{2}(f)\cos(2\pi fn)\\ &\propto& {\int}_{0}^{\infty}\mathrm{d}f\, f^{-\gamma}\text{sinc}^{2}(f)\cos(2\pi fn)\\ &\propto& (n+1)^{1+\gamma}-2n^{1+\gamma}+(n-1)^{1+\gamma} \end{array} $$

For large n, we can approximate \((n\pm 1)^{1+\gamma }\approx n^{1+\gamma }\pm (1+\gamma )n^{\gamma }+\frac {\gamma (1+\gamma )}{2}n^{\gamma -1}\), hence

$$\rho_{n}\propto n^{\gamma-1},\qquad 1\ll n\ll f_{1}^{-1}. $$
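The last step can be checked with a few lines of arithmetic (an added sanity check, not from the paper): the discrete second difference of \(n^{1+\gamma}\) indeed approaches \(\gamma(1+\gamma)n^{\gamma-1}\) for large n.

```python
def second_difference(n, g=0.5):
    """(n+1)^(1+g) - 2 n^(1+g) + (n-1)^(1+g), proportional to rho_n."""
    return (n + 1)**(1 + g) - 2 * n**(1 + g) + (n - 1)**(1 + g)

def asymptotic(n, g=0.5):
    """Leading-order expansion g (1+g) n^(g-1)."""
    return g * (1 + g) * n**(g - 1)
```

The relative deviation between the two shrinks as \(n^{-2}\), consistent with the Taylor expansion in the text.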

A.5 Inverse Gaussian input

To model non-Poissonian input spike trains with a CV smaller or larger than unity, we assume presynaptic spike trains to be renewal processes with an inverse Gaussian ISI density:

$$ P(T;\nu,C_{\mathrm{V}})=\frac{1}{\sqrt{2\pi C_{\mathrm{V}}^{2}\nu T^{3}}}\exp\left[-\frac{(\nu T-1)^{2}}{2C_{\mathrm{V}}^{2} \nu T}\right]. $$
(140)

This distribution is parametrized by the rate ν and the coefficient of variation \(C_{\mathrm{V}}\). The spike-train power spectrum is given by (Stratonovich 1967)

$$S_{\text{IG}}(f;\nu,C_{\mathrm{V}})=\frac{1-|\tilde P(f;\nu,C_{\mathrm{V}})|^{2}}{|1-\tilde P(f;\nu,C_{\mathrm{V}})|^{2}}, $$

where \(\tilde P(f;\nu ,C_{\mathrm {V}})\) is the Fourier transform of the inverse Gaussian, Eq. (140), given by

$$\tilde{P}(f;\nu,C_{\mathrm{V}})=\exp\left[\frac{1-\sqrt{1-4\pi i C_{\mathrm{V}}^{2}f/\nu}}{C_{\mathrm{V}}^{2}}\right] $$

(Holden 1976; Bulsara et al. 1994; Chacron et al. 2005). The power spectrum of the synaptic current is then, according to Eq. (27):

$$\begin{array}{@{}rcl@{}} S_{I}(f)=N_{\mathcal{E}}|\tilde j_{\mathcal{E}}(f)|^{2}S_{\text{IG}}(f;\nu_{\mathcal{E}},C_{\mathrm{V}})\\ +N_{\mathcal{I}}|\tilde j_{\mathcal{I}}(f)|^{2}S_{\text{IG}}(f;\nu_{\mathcal{I}}, C_{\mathrm{V}}). \end{array} $$
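The renewal spectrum \(S_{\text{IG}}\) can be evaluated directly from the Fourier transform above. A minimal sketch (with assumed parameter values); as normalized here, the spectrum tends to 1 at high frequencies and to \(C_{\mathrm{V}}^{2}\) at zero frequency, the standard renewal-theory limits:

```python
import numpy as np

def ig_fourier(f, nu=1.0, cv=0.5):
    """Fourier transform of the inverse Gaussian ISI density, Eq. (140)."""
    return np.exp((1.0 - np.sqrt(1.0 - 4j * np.pi * cv**2 * f / nu)) / cv**2)

def ig_spectrum(f, nu=1.0, cv=0.5):
    """Normalized spike-train power spectrum of the IG renewal process."""
    P = ig_fourier(f, nu, cv)
    return (1.0 - abs(P)**2) / abs(1.0 - P)**2
```

The complex square root in `ig_fourier` is the principal branch, which keeps \(|\tilde P(f)|\le 1\) as required for a probability density.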

A.6 Synapses with short-term plasticity

Following Dittman et al. (2000) and Merkel and Lindner (2010), we use a deterministic model for the synaptic dynamics. A presynaptic spike at the ℓ-th synapse causes a postsynaptic response with the history-dependent weight \(A_{\ell}(t)=F_{\ell}(t)D_{\ell}(t)\), where facilitation is governed by

$$\begin{array}{@{}rcl@{}} F_{\ell}(t) &=& F_{0} + ((1 - F_{0})^{-1} + F_{C,\ell}^{-1}(t) )^{-1}, \\ \dot{F}_{C,\ell}(t) &=& - F_{C,\ell}(t)/\tau_{F} + {\Delta} \cdot x_{\ell}(t), \end{array} $$

and where depression dynamics obey

$$\dot{D}_{\ell}(t) = (1 - D_{\ell}(t))/\tau_{D} - F_{\ell}(t^{-}) D_{\ell}(t^{-}) \cdot x_{\ell}(t). $$

Here, \(x_{\ell}(t)\) is the presynaptic spike train, \(F_{0}\) is the intrinsic release probability, Δ quantifies the strength of facilitation, and \(\tau_{F}\) (\(\tau_{D}\)) is the timescale of facilitation (depression). Above, we have compared the limiting cases of pure facilitation and pure depression, which correspond to setting \(D_{\ell}(t)\equiv 1\) (pure facilitation) or \(F_{\ell}(t)\equiv F_{0}\) (pure depression).
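Between spikes the dynamics are linear and can be integrated exactly, so the model admits an event-driven iteration. The sketch below is not the authors' code; the parameter values and the choice of evaluating F and D just before each spike-triggered update are illustrative assumptions.

```python
import math

def stp_weights(isis, F0=0.35, Delta=0.2, tauF=100.0, tauD=50.0):
    """Iterate facilitation (F_C, F) and depression (D) across a given
    sequence of presynaptic ISIs; returns the weight A = F * D at each
    spike, evaluated just before the spike-triggered updates."""
    FC, D = 0.0, 1.0
    weights = []
    for isi in isis:
        FC *= math.exp(-isi / tauF)                   # F_C decays between spikes
        D = 1.0 - (1.0 - D) * math.exp(-isi / tauD)   # D recovers toward 1
        F = F0 if FC == 0.0 else F0 + 1.0 / (1.0 / (1.0 - F0) + 1.0 / FC)
        weights.append(F * D)                         # response weight A
        D *= 1.0 - F                                  # release depletes resources
        FC += Delta                                   # spike-triggered facilitation
    return weights
```

For a periodic input train, the weights relax to a steady state reflecting the balance of facilitation and depression; the pure limit cases are recovered by freezing D at 1 or F at \(F_{0}\).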

A.7 Nonlinear integrate-and-fire models

Using the PRC, Eq. (18), the deviation of the i-th ISI from its mean is given by

$$\delta T_{i}\equiv T_{i}-\langle T\rangle=-\sigma{\int}_{0}^{\langle T\rangle}\mathrm{d}t\,Z(t)\eta(t_{i-1}+t). $$

This allows us to compute the serial correlations:

$$\begin{array}{@{}rcl@{}} &&{} \langle \delta T_{i}\delta T_{i+k}\rangle\\ &&{} =\sigma^{2}{\int}_{0}^{\langle T\rangle}\mathrm{d}t{\int}_{0}^{\langle T\rangle}\mathrm{d}t^{\prime}\,Z(t)Z(t^{\prime})\langle\eta(t_{i-1}\,+\,t)\eta(t_{i+k-1}\,+\,t^{\prime})\rangle\\ &&{} =\!\sigma^{2}{\int}_{0}^{\langle T\rangle}\mathrm{d}t{\int}_{0}^{\langle T\rangle}\!\mathrm{d}t^{\prime}\,Z(t)Z(t^{\prime})C_{\text{in}}{(t_{i+k-1}\,-\,t_{i-1}+t^{\prime}-t)}\\ &&{} =\sigma^{2}{\int}_{0}^{\langle T\rangle}\mathrm{d}t{\int}_{0}^{\langle T\rangle}\mathrm{d}t^{\prime}\,Z(t)Z(t^{\prime})C_{\text{in}}{(t_{k}+t^{\prime}-t)}. \end{array} $$

If \(C_{\text{in}}(t)\) does not change rapidly, we can approximate \(t_{k}\approx k\langle T\rangle\) in the last equation. Hence

$$\begin{array}{@{}rcl@{}} &&\langle \delta T_{i}\delta T_{i+k}\rangle\\ &&\approx\sigma^{2}{\int}_{0}^{\langle T\rangle}\mathrm{d}t{\int}_{0}^{\langle T\rangle}\mathrm{d}t^{\prime}\,Z(t)Z(t^{\prime})C_{\text{in}}{(k{\langle T\rangle}+t^{\prime}-t)}\\ &&=\sigma^{2}\langle T\rangle^{2}\int\mathrm{d}f\,S_{\text{in}}(f)|\tilde Z(f)|^{2}e^{-2\pi if{\langle T\rangle}k}, \end{array} $$

where \(\tilde Z(f)=\frac {1}{\langle T\rangle }{\int }_{0}^{\langle T\rangle }\mathrm {d}t\,Z(t)e^{2\pi i ft}\). From this follow Eqs. (19) and (20).
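The equality of the time-domain and frequency-domain expressions above can be illustrated numerically. The sketch below is an added check, not from the paper; it assumes a constant PRC, Z(t) ≡ 1, σ = 1, ⟨T⟩ = 1, and an exponential input correlation \(C_{\text{in}}(\tau)=e^{-|\tau|/\tau_c}\), for which \(\tilde Z(f)=e^{i\pi f\langle T\rangle}\,\text{sinc}(f\langle T\rangle)\) with the normalized sinc.

```python
import numpy as np
from scipy.integrate import dblquad, quad

T, tau_c, k = 1.0, 2.0, 3        # mean ISI, correlation time, ISI lag

def C_in(tau):
    return np.exp(-abs(tau) / tau_c)

# time domain: <delta T_i delta T_{i+k}> with Z = 1, sigma = 1
time_val, _ = dblquad(lambda tp, t: C_in(k * T + tp - t), 0.0, T, 0.0, T)

# frequency domain: T^2 int df S_in(f) |Z~(f)|^2 e^{-2 pi i f T k},
# folded onto f > 0; the upper limit is truncated (integrand ~ f^-4)
def integrand(f):
    S_in = 2.0 * tau_c / (1.0 + (2.0 * np.pi * tau_c * f)**2)  # Lorentzian
    return S_in * np.sinc(f * T)**2 * np.cos(2.0 * np.pi * f * T * k)

freq_val = 2.0 * T**2 * quad(integrand, 0.0, 50.0, limit=2000)[0]
```

Both routes give the same ISI covariance, which is the content of Eqs. (19) and (20) for this special case.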

The sum of all SCCs evaluates to

$${\sum}_{n=1}^{\infty}\rho_{n}=\frac{1}{2}\left(\frac{\sigma^{2}\tilde Z(0)^{2}}{C_{\mathrm{V}}^{2}\langle T\rangle}S(0)-1\right), $$

where \(\tilde Z(0)\) is equal to the average PRC obtained by averaging over the interval [0,〈T〉]. Furthermore, using Eq. (126) we obtain the long-time limit of the Fano factor or, equivalently, the spike train power spectrum in the zero-frequency limit as follows

$$\lim_{t\rightarrow\infty}F(t)=\langle T\rangle\lim_{f\rightarrow 0} S_{\text{out}}(f)=\frac{\sigma^{2}\tilde Z(0)^{2}}{\langle T\rangle}S(0). $$

This equation implies that for any neural oscillator driven by weak colored noise, the spike-count diffusion coefficient \(D_{\text{eff}}=S_{\text{out}}(0)/2\) is proportional to the intensity \(D_{\eta}=S(0)/2\) of the input noise, irrespective of the nonlinearity.

Cite this article

Schwalger, T., Droste, F. & Lindner, B. Statistical structure of neural spiking under non-Poissonian or other non-white stimulation. J Comput Neurosci 39, 29–51 (2015). https://doi.org/10.1007/s10827-015-0560-x
