Bistability and up/down state alternations in inhibition-dominated randomly connected networks of LIF neurons

Electrophysiological recordings in cortex in vivo have revealed a rich variety of dynamical regimes, ranging from irregular asynchronous states to a diversity of synchronized states, depending on species, anesthesia, and external stimulation. The average population firing rate in these states is typically low. We study analytically and numerically a network of sparsely connected excitatory and inhibitory integrate-and-fire neurons in the inhibition-dominated, low firing rate regime. For sufficiently high values of the external input, the network exhibits an asynchronous low firing frequency state (L). We show that two scenarios may occur when external inputs are decreased, depending on synaptic time constants: (1) the L state can destabilize through a Hopf bifurcation, leading to synchronized oscillations spanning the δ to β frequency range; (2) the network can reach a bistable region between the low firing frequency network state (L) and a quiescent one (Q). Adding an adaptation current to excitatory neurons leads to spontaneous alternations between L and Q states, similar to experimental observations of UP and DOWN state alternations.


A. Solving numerically mean-field equations
To solve the mean-field equations numerically, we introduce ODEs whose fixed points satisfy Eqs. (6-9),

τ_a dν_a(t)/dt = −ν_a(t) + Φ(µ_a(t), σ_a(t)),

where the µ_a's and σ_a's are functions of the ν_a(t)'s through Eqs. (8)-(9). To obtain convergence to the fixed points, we can use τ_I ≪ τ_E; typically we use τ_I = 0.1 τ_E. We emphasize that these equations do not describe the real dynamics of the system (which is given by the full system of Fokker-Planck equations, Eq. (12)); they are only a numerically convenient tool to find the fixed points of the mean-field equations.
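A minimal numerical sketch of this relaxation scheme is given below, with the Siegert transfer function standing in for Φ and a toy inhibition-dominated coupling standing in for Eqs. (8)-(9); all parameter values are illustrative, not the paper's.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

# Single-neuron parameters (illustrative values).
TAU_M, THETA, V_R, TAU_RP = 20e-3, 20e-3, 10e-3, 2e-3

def phi(mu, sigma):
    """Siegert transfer function of the LIF neuron (Eq. 7-style)."""
    y_th = (THETA - mu) / sigma
    y_r = (V_R - mu) / sigma
    if y_th > 8.0:  # deeply subthreshold: rate is numerically zero
        return 0.0
    integral, _ = quad(lambda u: np.exp(u * u) * (1.0 + erf(u)), y_r, y_th)
    return 1.0 / (TAU_RP + TAU_M * np.sqrt(np.pi) * integral)

# Hypothetical inhibition-dominated couplings; in this toy network both
# populations receive statistically identical recurrent input.
C_E, C_I, J, G, MU_EXT, SIG_EXT = 400, 100, 0.2e-3, 5.0, 22e-3, 5e-3

def mu_sigma(nu_E, nu_I):
    mu = MU_EXT + TAU_M * J * (C_E * nu_E - G * C_I * nu_I)
    var = SIG_EXT**2 + TAU_M * J**2 * (C_E * nu_E + G**2 * C_I * nu_I)
    return mu, np.sqrt(var)

def find_fixed_point(dt=0.5e-3, n_steps=4000, tau_E=20e-3, tau_I=2e-3):
    """Relax tau_a dnu_a/dt = -nu_a + phi(mu_a, sigma_a) to a fixed point.
    tau_I << tau_E (here tau_I = 0.1 tau_E) helps convergence."""
    nu_E, nu_I = 5.0, 5.0
    for _ in range(n_steps):
        mu, sigma = mu_sigma(nu_E, nu_I)
        rate = phi(mu, sigma)
        nu_E += dt / tau_E * (-nu_E + rate)
        nu_I += dt / tau_I * (-nu_I + rate)
    return nu_E, nu_I

nu_E, nu_I = find_fixed_point()
```

Because inhibition dominates, the rate dynamics implement a negative feedback loop, and the relaxation converges to a self-consistent rate rather than running away.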

B. Agreement between spiking simulations and mean field theory
While all the features described by the mean-field approach are reproduced qualitatively by numerical simulations, there are significant quantitative discrepancies between the two, as shown in Fig. 1. We therefore investigated the source of this discrepancy. A candidate is the finite amplitude of the synaptic couplings, which can lead to deviations from the diffusion approximation under which synaptic currents are treated as Gaussian noise (Eq. 3). To understand whether deviations from the diffusion approximation are the source of the observed discrepancies, we replaced the classic Siegert-Ricciardi transfer function (Eq. 7) with the recently obtained transfer function of LIF neurons receiving a combination of excitatory and inhibitory shot noise processes with exponentially distributed amplitudes (Richardson and Swarbrick 2010).

Figure 1: Average excitatory and inhibitory population firing frequency as a function of ν_X, for two values of the E-to-I synaptic efficacy: J_IE = 0.34 mV (A); J_IE = 0.32 mV (B); for both panels, J_EE = 0.2 mV, g = 3, D_EE = 20 ms, D_II = 10 ms. C, D: Average excitatory and inhibitory population firing frequency as a function of g for two values of the synaptic efficacy: J_IE = 0.34 mV (C); J_IE = 0.32 mV (D); for both panels, J_EE = 0.2 mV, ν_X = 3.8 (i.e. ν_X = 0.76 ν_θ), D_EE = 20 ms, D_II = 10 ms. Each simulation point is obtained by averaging the population firing frequency over T = 10 s. The error bars indicate the standard error of the temporal mean.
Richardson and Swarbrick (2010) computed the firing rate of a neuron receiving shot noise from pre-synaptic E and I afferents with exponentially distributed amplitudes J_aE and J_aI. The modified theory is then compared to numerical simulations of a network which is identical to the one described in the section "Spiking network model" except for two elements: first, an exponential distribution of synaptic efficacies is implemented; second, the external synaptic current received by a neuron i is no longer given by Eq. 3 as in the standard diffusion approximation, but is rather generated through a Poisson process, in which C_ext excitatory neurons send to neuron i synaptic pulses at rate ν_X and with exponentially distributed EPSP amplitudes. Fig. 2 shows the much better agreement between theory and simulations that can be obtained with such modifications, for the same set of parameters as in Fig. 1. This shows that the main factor explaining the discrepancies between theory and simulations in Fig. 1 is the deviation from the diffusion approximation due to the finite size of post-synaptic potentials.
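The shot-noise external input described above can be sketched as follows; the number of afferents, firing rate, and mean EPSP amplitude are illustrative values, not the paper's. The snippet also checks the generated input against the first two moments that the diffusion approximation would match (for exponentially distributed amplitudes A, E[A²] = 2 J_ext²).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: C_ext afferents firing at rate nu_X, exponentially
# distributed EPSP amplitudes with mean J_ext (volts).
C_EXT, NU_X, J_EXT = 1000, 5.0, 0.1e-3
DT, T = 0.1e-3, 10.0
n_steps = int(T / DT)

total_rate = C_EXT * NU_X  # summed Poisson rate of external spikes
counts = rng.poisson(total_rate * DT, size=n_steps)
# One independent exponential amplitude per incoming spike.
charge = np.array([rng.exponential(J_EXT, k).sum() for k in counts])

# First two moments of this compound Poisson input, per bin of width DT;
# the diffusion approximation is constructed to match exactly these.
mean_pred = total_rate * J_EXT * DT
var_pred = 2.0 * total_rate * J_EXT**2 * DT
```

The diffusion approximation reproduces the mean and variance of this input but not its higher moments, which is where the finite-amplitude corrections of Richardson and Swarbrick (2010) enter.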

C. Linear stability analysis of the UP-state
Linear stability analysis of networks of LIF neurons with white noise inputs has been described in multiple previous studies, of sparsely connected (Brunel and Hakim 1999; Brunel 2000) as well as fully connected networks (Brunel and Hansel 2006; Ledoux and Brunel 2011; Barbieri et al. 2014). Here, we briefly recapitulate the general formalism, and re-express the eigenvalue equation that originally appeared in Brunel (2000) in terms of neuronal linear response functions. This leads to equations similar to those of Ledoux and Brunel (2011), but with an additional contribution of the linear response functions to changes in variance (Lindner and Schimansky-Geier 2001).
The Fokker-Planck equations in the interval (−∞, θ) are

τ_m,a ∂P_a(V, t)/∂t = ∂/∂V [(V − µ_a(t)) P_a(V, t)] + (σ_a²(t)/2) ∂²P_a(V, t)/∂V²,

where averages are taken over an exponential distribution of synaptic times and a, b ∈ {E, I} for excitatory and inhibitory neurons. The boundary conditions are an absorbing boundary at threshold, P_a(θ, t) = 0, with the probability flux through θ equal to the instantaneous firing rate ν_a(t), re-injected at the reset potential V_r. The stationary solutions P_a0(V) are given by

P_a0(V) = (2 ν_a0 τ_m,a / σ_a0) exp(−(V − µ_a0)²/σ_a0²) ∫_{(V−µ_a0)/σ_a0}^{y_aθ} Θ(u − y_ar) e^{u²} du,

with

y_aθ = (θ − µ_a0)/σ_a0 and y_ar = (V_r − µ_a0)/σ_a0.

It is convenient to use reduced variables: for the potential, y_a = (V − µ_a0)/σ_a0; for the probability distribution, Q_a(y_a, t) = σ_a0 P_a(V, t); for the rates, ν_a = ν_a0 (1 + n_a(t)); and similarly for the various constants. The next step is to expand the Fokker-Planck equations about the stationary solutions Q_a0(y_a),

Q_a(y_a, t) = Q_a0(y_a) + Q_a1(y_a, t) + ...
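Assuming the standard stationary solution of the Fokker-Planck equation for white-noise-driven LIF neurons (Brunel 2000), a quick numerical sanity check is that P_a0 integrates to the fraction of non-refractory neurons, 1 − ν_a0 τ_rp. All parameter values below are illustrative.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

# Illustrative neuron parameters and stationary input statistics.
TAU_M, THETA, V_R, TAU_RP = 20e-3, 20e-3, 10e-3, 2e-3
MU0, SIG0 = 15e-3, 5e-3

y_th = (THETA - MU0) / SIG0
y_r = (V_R - MU0) / SIG0

# Stationary rate from the Siegert formula (Eq. 7).
nu0 = 1.0 / (TAU_RP + TAU_M * np.sqrt(np.pi) *
             quad(lambda u: np.exp(u * u) * (1.0 + erf(u)), y_r, y_th)[0])

def p0(v):
    """Stationary membrane-potential density P_a0(V)."""
    y = (v - MU0) / SIG0
    inner, _ = quad(lambda u: np.exp(u * u), max(y, y_r), y_th)
    return 2.0 * nu0 * TAU_M / SIG0 * np.exp(-y * y) * inner

# Total probability mass below threshold; should equal 1 - nu0 * tau_rp,
# the remainder being the fraction of refractory neurons.
mass, _ = quad(p0, MU0 - 10 * SIG0, THETA)
```

The identity follows by exchanging the order of the two integrals and using the Siegert formula, so it provides a nontrivial consistency check of both expressions at once.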
The boundary conditions are eight linear equations that determine the eight coefficients α+_a, α−_a, β+_a, β−_a. The further conditions α−_a = 0 are needed in order to obtain an integrable Q_a(y_a, λ), and can be derived from the solution of the system of boundary conditions.
Eventually, one finds that the eigenvalues λ are solutions of Eq. (29), in which each term describes the net effect of the instantaneous firing rate of population b on population a. The variable s_b is defined as s_E = −1, s_I = 1. The first term relates to the refractory period τ_rp,

R_rp,a(λ) = U(y_ar, λτ_m,a)(1 − e^{−λτ_rp}) / [U(y_aθ, λτ_m,a) − U(y_ar, λτ_m,a)].
The second term in the r.h.s. of Eq. (29) describes the effect of this instantaneous firing rate through the mean synaptic inputs; it is proportional to the linear firing rate response function

R_µ,a(λ) = [τ_m,a ν_b0 / (σ_a0 (1 + λτ_m,a))] · [∂U/∂y(y_aθ, λτ_m,a) − ∂U/∂y(y_ar, λτ_m,a)] / [U(y_aθ, λτ_m,a) − U(y_ar, λτ_m,a)],

where U(y, λ) = e^{y²} φ_2(y, λ). The third term in the r.h.s. of Eq. (29) describes the effect of this instantaneous firing rate through the variance of synaptic inputs; it is proportional to another linear response function,

R_σ²,a(λ) = [λτ_m,a + y_aθ ∂U/∂y(y_aθ, λτ_m,a) − y_ar ∂U/∂y(y_ar, λτ_m,a)] / [U(y_aθ, λτ_m,a) − U(y_ar, λτ_m,a)].
Note that this term disappears in networks in which the synaptic couplings scale as 1/C, in the large-C limit. To investigate the effect of variance dynamics on the instabilities of the asynchronous state, we set R_σ²,a = 0 in Eq. (29). Note that in the strong noise, low firing rate regime investigated in this paper, R_µ(f) is a monotonically decreasing function of f: it is essentially constant below a cut-off frequency of order 1/τ_m (Ostojic and Brunel 2011), and then decays asymptotically to zero as 1/√f (Brunel and Hakim 1999; Lindner and Schimansky-Geier 2001; Brunel et al. 2001). On the other hand, R_σ² has a finite limit when f goes to infinity (Lindner and Schimansky-Geier 2001).

D. Treatment of firing rate adaptation
Following La Camera et al. (2004), we replace the adaptation current by its mean value βν_E0. The mean-field equations (6-9) are unaffected, except that the equation for µ_E0 acquires an additional term −βν_E0 accounting for the mean adaptation current.
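The resulting self-consistency condition, with the mean adaptation current βν_E0 subtracted from the mean input, can be solved by damped fixed-point iteration; the transfer-function parameters and adaptation strength below are illustrative, not the paper's.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

# Illustrative neuron parameters; BETA is the adaptation strength (V per Hz).
TAU_M, THETA, V_R, TAU_RP = 20e-3, 20e-3, 10e-3, 2e-3
MU_IN, SIG, BETA = 18e-3, 6e-3, 0.5e-3

def phi(mu, sigma):
    """Siegert transfer function of the LIF neuron."""
    y_th, y_r = (THETA - mu) / sigma, (V_R - mu) / sigma
    g, _ = quad(lambda u: np.exp(u * u) * (1.0 + erf(u)), y_r, y_th)
    return 1.0 / (TAU_RP + TAU_M * np.sqrt(np.pi) * g)

# Self-consistent rate: nu = phi(mu_in - beta * nu, sigma), solved by
# damped fixed-point iteration (step size 0.1 for stability).
nu = 1.0
for _ in range(500):
    nu += 0.1 * (phi(MU_IN - BETA * nu, SIG) - nu)
```

Because the adaptation term reduces the mean input as the rate grows, the map has negative feedback and the damped iteration converges to the unique self-consistent rate in this regime.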

E. Spiking Network Model with exponentially decaying synaptic currents
As before, the total current arriving at a postsynaptic neuron i is due to the activity of its local (pre-synaptic) afferents and to the current elicited by external inputs,

I_i(t) = I_i^ext(t) + I_i^rec(t),

where I_i^ext(t) is the external input current and I_i^rec(t) is the total recurrent synaptic current. We now model the recurrent contribution to the post-synaptic current as the sum of contributions mediated by NMDA, AMPA and GABA receptors, with distinct time constants. The total recurrent current becomes

I_i^rec(t) = I_i^NMDA(t) + I_i^AMPA(t) + I_i^GABA(t),

where each of the synaptic currents is described by a separate differential equation, and x_E,I is the fraction of the charge coming from excitatory afferents which is mediated by NMDA receptors, which mediate a slow post-synaptic current. The remaining fraction of the charge (1 − x_E,I) is mediated by AMPA receptors and elicits a faster current. The current coming from inhibitory afferents is mediated by GABA receptors. Upon arrival of a pre-synaptic spike at time t_k, the postsynaptic current instantaneously receives a "kick" proportional to the synaptic efficacy J (in mV), followed by an exponential decay with time constant τ_syn, where τ_syn ∈ {τ_AMPA, τ_NMDA, τ_GABA}. In what follows we set x_I, as well as the synaptic delays, to zero.
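The kick-and-decay dynamics can be sketched as follows; the simulated mean current is compared against its stationary value J ν τ_syn. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative values: decay time (s), kick size (V), summed input rate (Hz).
TAU_SYN, J, RATE = 5e-3, 0.2e-3, 400.0
DT, T = 0.1e-3, 20.0
n = int(T / DT)

kicks = J * rng.poisson(RATE * DT, size=n)  # total kick per time bin
decay = 1.0 - DT / TAU_SYN                  # Euler decay factor
s = np.empty(n)
acc = 0.0
for i in range(n):
    acc = decay * acc + kicks[i]            # exponential decay, then kicks
    s[i] = acc

# Stationary mean of the kick-and-decay process: J * rate * tau_syn.
mean_s = s.mean()
```

The stationary mean follows from balancing the average kick per step, J · rate · dt, against the average decay per step, s · dt/τ_syn.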

F. Spiking Network Model with conductance-based synaptic currents
We simulated a network with conductance-based synapses, in which the recurrent synaptic currents of neuron i in population a (= E, I) are given by

I_i,a^rec(t) = s_i,a^E(t) (V_E − V_i(t)) + s_i,a^I(t) (V_I − V_i(t)),

where V_E = 70 mV is the excitatory reversal potential, V_I = 0 mV is the inhibitory reversal potential, and s_i,a^E,I(t) represent excitatory and inhibitory synaptic conductances (normalized by the leak conductance). These evolve in time with kick-and-decay dynamics: each spike from a pre-synaptic neuron of population b increments s_i,a^b by g_ab, after which it decays exponentially, where τ_b (b = E, I) is the decay time constant of E/I synaptic currents, and g_ab is the (normalized) conductance of b synapses onto a neurons. The conductances were obtained by dividing the synaptic efficacies in the current-based network by the driving force at threshold, i.e. (V_E − θ) for E synapses, (V_I − θ) for I synapses, and by rescaling E synapses by an additional factor γ. Excitatory neurons were also endowed with an adaptation current described by Eqs. (10, 11) in the main text.
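The conversion from current-based efficacies to normalized conductances described above can be sketched as follows; θ and the reversal potentials are those of the text, while γ and the efficacy values are hypothetical.

```python
# Membrane threshold and reversal potentials (mV), per the text.
THETA, V_E, V_I = 20.0, 70.0, 0.0
GAMMA = 1.2  # hypothetical rescaling factor for E synapses

def to_conductance(J, excitatory):
    """Normalized conductance from a current-based efficacy J (mV):
    divide by the magnitude of the driving force at threshold."""
    if excitatory:
        return GAMMA * J / abs(V_E - THETA)
    return J / abs(V_I - THETA)

g_EE = to_conductance(0.2, excitatory=True)   # from J_EE = 0.2 mV
g_EI = to_conductance(0.6, excitatory=False)  # from a hypothetical J_EI
```

Matching the driving force at threshold makes the conductance-based synapse deliver approximately the same charge as its current-based counterpart when the membrane potential is near θ.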

G. Identification of transitions between up and down states in simulations
Up and down state transitions were identified using the method described in Neske et al. (2015). In particular, both a fast and a slow exponential moving average were calculated over the excitatory population firing rate. The intersection points of the two moving averages determined the onset and offset times of state transitions.
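A minimal sketch of the two-EMA crossing detector on a synthetic rate trace; the time constants and the trace itself are illustrative, not the values used in the paper.

```python
import numpy as np

def ema(x, tau, dt):
    """Causal exponential moving average with time constant tau."""
    alpha = dt / tau
    out = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        out[i] = acc
    return out

def transitions(rate, dt, tau_fast=20e-3, tau_slow=500e-3):
    """Indices where the fast EMA crosses the slow EMA."""
    fast = ema(rate, tau_fast, dt)
    slow = ema(rate, tau_slow, dt)
    sign = np.sign(fast - slow)
    return np.flatnonzero(np.diff(sign) != 0) + 1

# Synthetic excitatory population rate: down-up-down, plus noise.
rng = np.random.default_rng(2)
dt = 1e-3
rate = np.concatenate([np.full(2000, 1.0), np.full(2000, 20.0),
                       np.full(2000, 1.0)]) + rng.normal(0.0, 0.2, 6000)
idx = transitions(rate, dt)
```

In flat segments the two averages hover around the same value, so noise produces spurious crossings there; in practice a minimum-duration or amplitude criterion is usually added to keep only genuine state transitions.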