Stability of synchronization under stochastic perturbations in leaky integrate and fire neural networks of finite size

We study the synchronization of fully connected and totally excitatory integrate and fire neural networks in the presence of Gaussian white noise. Using a large deviation principle, we prove the stability of the synchronized state under stochastic perturbations. We then give a lower bound on the probability of synchronization for networks that are not initially synchronized. This bound shows the robustness of the emergence of synchronization in the presence of small stochastic perturbations.


Introduction
Neurons are the cells of the nervous system that generate transmissible electrical signals called action potentials, or spikes for short, and encode information in the sequences of time intervals separating them (inter-spike intervals, ISI). The information encoded in spike sequences is transmitted and processed in the network formed by the neurons by means of synaptic interactions triggered by the spikes. These interactions may facilitate (excitatory interactions) or prevent (inhibitory interactions) the emission of spikes in postsynaptic neurons. Among the collective behaviors emerging from this coupling, synchronization is one that has attracted much attention in experimental and theoretical works: a network is fully synchronized when all its neurons spike simultaneously.
Mathematical studies of synchronization were first carried out in deterministic neural models. For instance, in Mirollo and Strogatz (1990), the authors prove that in fully connected and totally excitatory leaky integrate and fire neural network models, synchronization occurs for almost every initial state. When weak interactions are assumed, a large class of models can be reduced to canonical systems of phase-coupled oscillators Hoppensteadt and Izhikevich (1997); Izhikevich (1999a). This formalism allows one to study the existence and the stability of synchronized solutions in weakly coupled general networks Izhikevich (1999b), including networks containing inhibitory synapses and synaptic delays Van Vreeswijk (1996); Van Vreeswijk et al. (1994). Synchronized solutions, and more generally phase-locked solutions, in integrate and fire networks with delays have also been analyzed in the case of strong or arbitrary coupling intensity; see for instance Bressloff and Coombes (1999, 2000).
Successive in vivo recordings of the membrane potential of a neuron show a large variability in its ISI. Although this variability could be the result of the complex encoding of information in the neural network, it nevertheless suggests the presence of noise in the neural activity. As a matter of fact, the sources of noise are multiple: we can mention thermal noise, channel noise, synaptic noise, and the noise resulting from background neural activity. Noise is commonly modeled by a Gaussian stochastic process: an additive white noise is considered in the dynamics of the membrane potential. Gaussian noise models a weak interaction between the neurons of a sub-network and those of the rest of the network. This can be derived rigorously by assuming balanced excitation and inhibition outside of the sub-network, and Poisson statistics for the times of the presynaptic spikes Gerstner and Kistler (2002).
The mathematical analysis, and the problem of efficient numerical simulation, of neural networks with Gaussian noise can be tackled for simplified models such as (leaky) integrate and fire models. To this class of models it is possible to associate a discrete-time Markov chain containing all the information needed to study the spiking times Touboul and Faugeras (2011). This approach allows one to deduce qualitative properties of the network and also to build efficient algorithms to simulate the ISI exactly. Nevertheless, the dimension of the Markov chain depends on the characteristics of the network, and in most cases a good knowledge of the transition distribution of the chain is needed.
For instance, for Perfect Integrate and Fire (PIF) or Leaky Integrate and Fire (LIF) network models with inhibitory interactions, the countdown sequence, which after any spiking time gives for each neuron the remaining time until it emits a spike (supposing no interaction meanwhile), is a Markov chain. For networks with excitatory interactions, to keep the Markov property of the countdown process it is necessary to change the interaction rules Turova et al. (1994) (taking into account the last presynaptic spike and forgetting the previous ones). Instead of changing the interaction rules, another way to obtain the Markov property is to consider a chain of higher dimension composed of the pairs of the countdown process and the membrane potential of each neuron at the spiking times Touboul and Faugeras (2011). Unfortunately, there are no explicit expressions for the transition probabilities of this chain, even for simple models such as LIF networks. The inefficiency of the Markov chain approach makes the study and simulation of neural networks with excitatory synapses difficult.
The mean-field limit of such networks, when the number of neurons goes to infinity, has also been studied Delarue et al. (2015b,a). It exhibits another kind of synchronization, defined by the fact that the number of neurons which spike simultaneously is a macroscopic proportion of the full network (i.e., ηN, where N is the number of neurons and 0 < η ≤ 1). In this context, the size of the interactions has to decrease with the number of neurons: they are of order 1/N.
In the present paper, we study the complete synchronization of finite networks which can be considered as noisy versions of the model of Mirollo and Strogatz (1990), that is, fully connected and totally excitatory integrate and fire neural networks with noise. Using a large deviation principle, we show that if the network is synchronized, it remains synchronized with large probability, provided the random perturbations are small enough. Next, we study the emergence of synchronization in a network which is not initially synchronized. The proof developed in Mirollo and Strogatz (1990) cannot be easily adapted, since it relies on the preservation of an order which is destroyed in the presence of noise. Therefore, to obtain an estimate of the probability that synchronization occurs before the n-th firing of the network, we propose an alternative strategy to those of Mirollo and Strogatz (1990); Turova et al. (1994), which uses arguments developed for deterministic networks in Catsigeras and Guiraud (2013). This estimate is obtained by supposing a specific relation between the intensity of interaction and the number of neurons. These results show that both the emergence of synchronization proved in a deterministic setting and the synchronized state itself are robust under small stochastic perturbations.
The paper is organized as follows. In Section 2, we introduce the integrate and fire neural network with Gaussian noise. In Section 3, we give our main results on synchronization. In Section 4, we present numerical simulations. All the proofs are contained in Section 5 and in the Appendix, which details the large deviation principle for the Ornstein-Uhlenbeck process.

Model
We consider a leaky integrate and fire neural network with Brownian noise. We suppose the network contains N neurons, labelled by an index in the set I := {1, …, N}. For each i ∈ I, the quantity V^{i,ε}(t) represents the membrane potential of the neuron i at time t ≥ 0, in the presence of a noise whose intensity is parametrized by ε ≥ 0.
As usual in integrate and fire neural networks, the dynamics of the model considered here has two regimes: a sub-threshold regime and a firing regime. We say that the network is in the sub-threshold regime if the potential of every neuron is smaller than a fixed threshold potential θ. When the potential of one neuron in the network reaches the threshold, it emits a spike and we say that the network is in the firing regime. Then, the potential of the spiking neurons is reset to a potential V_r (smaller than θ), and the neurons that do not spike, but are connected with at least one spiking neuron, undergo a jump in their potential depending on the synaptic weights. We now detail the two regimes of the network. For all n ≥ 0, we denote by τ^ε_n the n-th instant when the network is in the firing regime. More precisely, by convention τ^ε_0 = 0, and for all n ≥ 0 the instant τ^ε_{n+1} is the first instant after τ^ε_n when a spike is emitted in the network. We suppose that the sequence (τ^ε_n)_{n∈N} obeys the following induction, where Ṽ^{i,ε}_n is the solution of the stochastic differential equation with γ > 0, β > θ, and (W^1_t)_{t≥0}, (W^2_t)_{t≥0}, …, (W^N_t)_{t≥0} independent Brownian motions. The constant γ is related to the resistance R and the capacity C of the neural membrane and is equal to 1/RC. The parameter β takes into account a constant external current I_ext and satisfies β = R I_ext. Note that for n ≥ 1, the quantity V^{i,ε}(τ^ε_n) is the potential of the neuron i just after the n-th firing regime (we will describe it shortly). At time τ^ε_0 = 0, we suppose that the potentials V^{i,ε}(0) are included in [α, θ), where α < 0.
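As a reading aid, one consistent way to write the sub-threshold dynamics, inferred from γ = 1/RC, β = R I_ext, the condition β > θ, and the drift b(x) = −γx + K used in the Appendix (a hedged reconstruction, not a verbatim quotation of the paper's displayed equations), is:

```latex
\[
  d\tilde V^{i,\varepsilon}_n(t)
    = \gamma\bigl(\beta - \tilde V^{i,\varepsilon}_n(t)\bigr)\,dt
      + \varepsilon\, dW^i_t ,
  \qquad t \ge \tau^{\varepsilon}_n ,
  \qquad
  \tilde V^{i,\varepsilon}_n(\tau^{\varepsilon}_n)
    = V^{i,\varepsilon}(\tau^{\varepsilon}_n),
\]
% so that, with the flow of the noiseless equation
\[
  \varphi_t(x) := x\,e^{-\gamma t} + \beta\,(1 - e^{-\gamma t}),
  \qquad
  \varphi_t(x+y) = \varphi_t(x) + y\,e^{-\gamma t},
\]
% the process admits the explicit Ornstein--Uhlenbeck representation
\[
  \tilde V^{i,\varepsilon}_n(t)
    = \varphi_{t-\tau^{\varepsilon}_n}\!\bigl(V^{i,\varepsilon}(\tau^{\varepsilon}_n)\bigr)
      + \varepsilon \int_{\tau^{\varepsilon}_n}^{t} e^{-\gamma(t-s)}\, dW^i_s .
\]
```

Under this reading, the equilibrium value of the noiseless drift is β, so the hypothesis β > θ guarantees that each neuron reaches the threshold in finite time in the absence of interactions; the identity for φ_t(x + y) is the one used in the proof of Lemma 5.1.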
It follows that for each n ∈ N and i ∈ I, the process Ṽ^{i,ε}_n is an Ornstein-Uhlenbeck process, so it can also be written in integrated form. In the sub-threshold regime, that is for t ∈ (τ^ε_n, τ^ε_{n+1}) with n ≥ 0, we suppose that the membrane potential of each neuron satisfies equation (4). We now describe the firing regime. We denote by V^{i,ε}(τ^ε_n−) the value of the potential of neuron i, in the sub-threshold regime, just before the instant τ^ε_n. We define step by step the set J(n) of neurons which spike at time τ^ε_n. The set J_0(n) corresponds to the neurons which spike spontaneously (i.e., without any interaction with the other neurons in the network). In practice, in our model, the set J_0(n) is almost surely a singleton. We then define J_1(n), where the constants H_{ji} are the synaptic weights: they represent the effect on the neuron i of a spike emitted by the neuron j. We assume the network is fully connected and purely excitatory:

m := min_{i,j∈I} H_{ji} > 0.   (5)

Note that a neuron i of J_1(n) has a membrane potential smaller than θ at time τ^ε_n− but larger than θ after receiving, at time τ^ε_n, the kicks H_{ji} of the spiking neurons j of J_0(n). The set J_1(n) is therefore the set of the neurons which spike at time τ^ε_n by interaction with the neurons of J_0(n). In the same way, we define by induction the sets J_{p+1}(n), and J(n) := ∪_{p≥0} J_p(n) is the set of the labels of the neurons which emit a spike at time τ^ε_n. Note that this union is finite, since our definition implies that J_{p+1}(n) ⊂ (∪_{0≤q≤p} J_q(n))^C. Therefore, a neuron cannot spike twice (or more) at time τ^ε_n. Another way to obtain this property would be to introduce a refractory period in the model, but it would add technical difficulties. Nevertheless, the model considered in this paper can be interpreted as the limit of the similar model with a refractory period, when the refractory period goes to zero. We can now define the state of the network at time τ^ε_n, at the end of the firing regime. This ends the n-th firing regime, and the network starts again to evolve according to equation (4) of the sub-threshold regime.
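The two regimes described above can be sketched in code. The drift γ(β − V), the Euler-Maruyama discretization, and the numerical values below are illustrative assumptions (the parameter ranges echo Section 4), not a verbatim transcription of the paper's equations:

```python
import numpy as np

# Illustrative parameter values (gamma in s^-1, potentials in mV); the drift
# gamma*(beta - V) and the Euler-Maruyama scheme are our assumptions.
GAMMA, BETA, THETA, V_RESET = 100.0, -52.0, -55.0, -70.0

def subthreshold_step(v, eps, dt, rng):
    """One Euler-Maruyama step of the assumed sub-threshold SDE
    dV = gamma*(beta - V) dt + eps dW, for all neurons at once."""
    return v + GAMMA * (BETA - v) * dt + eps * np.sqrt(dt) * rng.standard_normal(v.shape)

def firing_cascade(v, H, theta=THETA, v_reset=V_RESET):
    """Given the potentials v at a firing time, compute the spiking set
    J(n) = J_0(n) u J_1(n) u ... by the inductive rule of Section 2,
    then return the post-firing potentials and the spiking set."""
    spiked = v >= theta                          # J_0(n): spontaneous spikes
    while True:
        kick = H.T @ spiked.astype(float)        # kick_i = sum over spiked j of H_ji
        newly = ~spiked & (v + kick >= theta)    # next layer J_{p+1}(n)
        if not newly.any():                      # cascade has stabilized
            break
        spiked |= newly
    # spiking neurons are reset to V_r; silent neurons keep their kicked potential
    return np.where(spiked, v_reset, v + H.T @ spiked.astype(float)), spiked
```

For example, with three neurons at (−54, −56, −60) mV and uniform weights H_ji = 2 mV, the first neuron spikes spontaneously, its kick recruits the second, and the third ends at −56 mV without spiking.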

Results
Recall that the network is synchronized at time t if all the neurons spike simultaneously at time t. We have already introduced the sequence (τ^ε_n)_{n∈N} of spiking times at which at least one neuron emits a spike. Here, we are interested in the event that the network is synchronized at time τ^ε_n, that is, J(n) = I. In a deterministic framework, once the network is synchronized it remains synchronized forever. But when stochastic perturbations are considered, the potentials of the neurons do not remain equal after synchronization. Therefore, it is not guaranteed that the network gets synchronized again at the next spiking time. Nevertheless, an estimate of the probability of this event is provided by the following theorem.

Theorem 3.1. There exists ε_0 > 0 such that for every ε ∈ (0, ε_0], the probability that the network stays synchronized at time τ^ε_{n+1}, given that for some integer n it is synchronized at time τ^ε_n, satisfies the lower bound below.

This result essentially shows that synchronization is not destabilized by small stochastic perturbations. More precisely, if the intensity ε of the noise goes to 0, then the probability that the network stays synchronized goes to 1; for a sufficiently small noise intensity, the network stays synchronized with high probability. The proof of Theorem 3.1 is given in Section 5.2. The main idea is to use a large deviation principle to obtain a lower bound on the probability that the potential of each neuron remains close to the deterministic synchronized solution until some neurons reach the threshold potential. In this case, the potentials of the neurons are sufficiently close to each other at the firing time, and the interactions between neurons induce the synchronization. We now study the probability that the network gets synchronized.
Theorem 3.2. Assume (V^{i,ε}(t), i ∈ I, t ≥ 0) evolves according to (4) and (6), and that the number N of neurons in the network satisfies condition (9). Then, for all n ∈ [(θ−α)/m, mN/(θ−α)], the probability that the network synchronizes during the first n spiking times satisfies the following lower bound, where Φ denotes the cumulative distribution function of a standard Gaussian random variable and (x)_+ := sup(x, 0) denotes the positive part of a real number x.
Note that condition (9) ensures that the range of n in Theorem 3.2 is not empty. The sequence of events (BS^ε_n)_n is obviously increasing, so we deduce from Theorem 3.2 a lower bound valid for every n ≥ n_0 (without any upper-bound constraint on n), where p^ε_1, p_2 and p_3 are dimensionless parameters. Note also that condition (9) can be written as N ≥ p_2(p_2 + 2). An analysis of (11) shows that the lower bound on P(BS^ε_n) is an increasing function of p^ε_1 and a decreasing function of p_2. Therefore, if n ≥ n_0, the lower bound on P(BS^ε_n) increases when the intensity ε of the noise decreases, and goes to 1 when ε goes to 0. This result shows that the phenomenon of synchronization observed in deterministic networks, such as those of Mirollo and Strogatz (1990), is stable under stochastic perturbations. In other words, in the presence of small stochastic perturbations, synchronization occurs in a finite time (smaller than τ^ε_n) with a high probability, which goes to 1 in the deterministic limit. Moreover, as expected in excitatory networks, strong interaction between neurons facilitates synchronization. Indeed, our lower bound on the probability of synchronization is an increasing function of the interaction parameter m. Furthermore, with larger m our estimate is valid for a smaller number of neurons (see (9)) and a smaller number of spiking times (n ≥ (θ−α)/m).
Remark 1. In the proof of Theorem 3.1, we obtain a control on the probability that Card(J_1(n+1)) = N − 1 given J(n) = I, which is not a necessary condition for synchronization. Indeed, order the membrane potentials of the neurons at time τ^ε_{n+1}−; a sufficient condition for synchronization can be stated in terms of this order. This condition is weaker than the bound we use in the proof of Theorem 3.1 but, unfortunately, even the law of V^{κ(i),ε}(τ^ε_{n+1}−) is out of reach: it is related to the order statistics of a family of N Ornstein-Uhlenbeck processes at the first time at which one of them hits θ. This law is unknown. Sacerdote and Zucca (2013) give first results in this direction in a very simplified setting with N = 2 neurons and γ = 0 (Perfect Integrate and Fire model). See also a recent review for the LIF model with N = 1 in Sacerdote and Giraudo (2013). Similar results in our context would permit an accurate evaluation of the probability of synchronization.

Numerical simulations
In this section, we illustrate our theorems with figures obtained by numerical simulations of the model.
In Figure 1, we illustrate Theorem 3.1: we plot, as a function of the noise intensity ε, the probability that the network is synchronized at the (n+1)-th firing time given that it was synchronized at the n-th firing time. For this figure, the simulation has been done with N = 1599 neurons, γ = 100 s⁻¹, θ = −55 mV, V_r = −70 mV, m = 0.75 mV, H_{ji} = m for all i, j in I, and three values of β: β = −52 mV (red), β = −54.7 mV (green) and β = −55.3 mV (blue). The first two values of β satisfy our hypothesis β > θ, but not the last one.
As predicted by our theorem, for β > θ the plots show that once the network is synchronized, it stays synchronized with very large probability, provided the noise is sufficiently small.In other words, the synchronized state is stable under small stochastic perturbations.
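A crude Monte Carlo counterpart of this experiment can be sketched as follows; the drift γ(β − V), the Euler step, the uniform weights H_ji = m, and the trial counts are all illustrative assumptions, not the exact simulation protocol used for the figures:

```python
import numpy as np

# Monte Carlo sketch for the conditional probability of Theorem 3.1: start all
# neurons from the reset value (synchronized state) and test whether the whole
# network fires together at the next firing time.
GAMMA, BETA, THETA, V_RESET, M = 100.0, -52.0, -55.0, -70.0, 0.75

def stays_synchronized(n_neurons, eps, dt=1e-4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    v = np.full(n_neurons, V_RESET)
    while v.max() < THETA:            # sub-threshold diffusion until the first spike
        v += GAMMA * (BETA - v) * dt + eps * np.sqrt(dt) * rng.standard_normal(n_neurons)
    spiked = v >= THETA               # J_0: spontaneous spikes
    while True:                       # cascade J_1, J_2, ... with uniform weights H_ji = m
        newly = ~spiked & (v + M * spiked.sum() >= THETA)
        if not newly.any():
            break
        spiked |= newly
    return bool(spiked.all())         # full synchronization at the next firing time?

def estimate(n_neurons, eps, trials=200, seed=0):
    """Fraction of trials in which the network stays synchronized."""
    rng = np.random.default_rng(seed)
    return sum(stays_synchronized(n_neurons, eps, rng=rng) for _ in range(trials)) / trials
```

In the noiseless case ε = 0 the estimate equals 1, and sweeping ε should qualitatively reproduce the decay of the conditional synchronization probability seen in Figure 1.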
Figure 2 shows, in the upper panel, the probability P(BS^ε_n) of synchronization before the n-th firing of the network as a function of n. In the lower panel, the figure shows the probability P(BS^ε_n \ BS^ε_{n−1}) of first synchronization at the n-th firing time versus n. The plots are done for several values of the intensity of the interactions. For the deterministic model (ε = 0), synchronization occurs in finite time Mirollo and Strogatz (1990); Catsigeras and Guiraud (2013). In Figure 2, we see that even in the presence of noise, synchronization occurs in a short time with high probability. We also observe the same monotonicity properties as for the bound (11): the higher the intensity of the interactions, the higher the probability that the network synchronizes fast.

Proofs

The proof of Theorem 3.2 consists in estimating the quantities (12) and (13) in order to obtain Card(J(·)) = N using the Bayes formula. A lower bound on (13) is obtained in Lemma 5.2 for A = N/n. The control of the quantity (12) is given in Lemma 5.4.
We first prove a technical lemma giving the explicit expression of the potential V^{i,ε} before the first spiking time of neuron i.
Lemma 5.1. For all n ≥ 1 and i ∈ I, we have the following representation, where 1_S denotes the indicator function of a set S, that is, 1_S(ω) = 1 for ω ∈ S and 1_S(ω) = 0 for ω ∉ S.
Proof. By formula (6), the representation involves the stochastic integral ∫ e^{γs} dW^i_s, and the lemma is true for n = 1. From the same formula (6) at time τ^ε_{n+1}, supposing the lemma is true at rank n and using the identity φ_t(x + y) = φ_t(x) + y e^{−γt}, we deduce from (15) and (16) that the property holds at rank n + 1.

Lemma 5.2. Assume the support of the initial condition is included in (α, θ), and let n ≥ (θ−α)/m; then we have the following lower bound.

Proof. From Lemma 5.1 we deduce that, for any i ∈ I, the event under consideration can be expressed through an inequality of the form ∫ e^{γs} dW^i_s < θ.

Now, recall that
On the other hand, by monotonicity of the cumulative distribution function Φ, for all a ≥ a_0 and σ² ≤ σ²_0, the corresponding comparison of Gaussian probabilities holds. The result then follows from (17).

Lemma 5.3. For all n ≥ 0, t ≥ τ_n and i ∈ I, the following estimate holds.

Proof. We start with n = 0. For all t, conditionally on the initial value V^{i,ε}(0), the random variable Ṽ^{i,ε}_0(t) has a Gaussian distribution with mean φ_t(V^{i,ε}(0)). The proof of (19) for n = 0 is completed by using (18) and integrating with respect to the law of V^{i,ε}(0).
Using conditional probability, we obtain an upper bound on the first term, from which the estimate follows for n = 1. To conclude, we use a straightforward induction on n, splitting into cases according to the position of Ṽ^{i,ε}_n(τ_n) with respect to α.
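The Gaussian law invoked in this proof can be checked numerically: between firing times each neuron follows an Ornstein-Uhlenbeck process, hence conditionally on its starting point it is Gaussian with mean φ_t(x) and variance ε²(1 − e^{−2γt})/(2γ); the variance formula is our own computation of Var(ε ∫₀ᵗ e^{−γ(t−s)} dW_s) via the Itô isometry. A minimal sketch with toy values (γ = 1 and drift −γx, i.e., β = 0):

```python
import numpy as np

def ou_mean_var(x0, gamma, eps, t):
    """Closed-form conditional law of the OU process dX = -gamma*X dt + eps dW
    at time t given X(0) = x0 (beta = 0 for simplicity): Gaussian with
    mean x0*exp(-gamma*t) and variance eps^2*(1 - exp(-2*gamma*t))/(2*gamma)."""
    mean = x0 * np.exp(-gamma * t)
    var = eps**2 * (1.0 - np.exp(-2.0 * gamma * t)) / (2.0 * gamma)
    return mean, var

def ou_empirical(x0, gamma, eps, t, n_paths=20000, dt=1e-3, seed=0):
    """Empirical mean and variance at time t over Euler-Maruyama paths."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(int(round(t / dt))):
        x += -gamma * x * dt + eps * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x.mean(), x.var()
```

With x0 = 1, γ = 1, ε = 0.5 and t = 1, the empirical moments agree with the closed forms (mean ≈ e⁻¹, variance ≈ 0.25(1 − e⁻²)/2) to within sampling error.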
Lemma 5.4. Let A be a positive real number; then the bound below holds.

Proof. On the one hand, assuming A ≥ (θ−α)/m, we apply Lemma 5.3 with C = Am. On the other hand, we use the conditional independence of the processes Ṽ^{i,ε}_n, i ∈ I. For such an integer n, we can apply Lemma 5.4 with A = N/n and Lemma 5.2 to obtain the announced lower bound.

5.2 Proof of the result of stability (Theorem 3.1)

First, the system has the homogeneous Markov property, so the events under consideration have the same probability for any n. Thus, we prove Theorem 3.1 for n = 0; that is, we consider a system of N neurons initially synchronized. We estimate in this section a lower bound for the probability of staying synchronized at the next spiking time τ^ε_1 of the network (see (1) and (2)). The main tool of the proof of Theorem 3.1 is the large deviation principle for the Ornstein-Uhlenbeck process. We give an explicit expression of the good rate function in the following lemma.
Lemma 5.5. Let x be the solution of the linear equation and X^ε the solution of the associated noisy dynamics.
A Large Deviation Principle for the Ornstein-Uhlenbeck Process

We recall a result of Freidlin-Wentzell theory and apply it to the particular case of the Ornstein-Uhlenbeck process.
We consider a one-dimensional deterministic function (x(t), t ≥ 0) evolving according to an ordinary differential equation with drift b : R → R, a uniformly Lipschitz continuous function. The main question concerns the distance between x and its stochastic perturbation by an additive noise. Precisely, we aim to estimate, for fixed T > 0 and δ > 0, the probability that the perturbed path deviates from x by more than δ before time T. Answers to this classical question can be obtained thanks to Freidlin-Wentzell theory, for which we introduce the appropriate functional set. The proof can be found, e.g., in (Dembo and Zeitouni, 2010, Sect. 5.6). We apply the theorem with the set Γ of interest. The LDP writes as upper and lower bounds, where Γ̊ and Γ̄ denote the interior and the closure of Γ, respectively. In our case the first and last quantities are equal, so it remains to evaluate the infimum of the rate function over Γ. As b(x) = −γx + K, we find an explicit expression for this quantity, since for all Λ ∈ H¹_0 such that Λ(T_1) = 0, the rate functional takes an explicit form.
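The explicit minimization can be carried out as follows. This is our own computation, under the assumption that the relevant event is that the deviation Λ := X^ε − x reaches a level δ before time T; it should be read as a sketch, not as the paper's exact statement:

```latex
% Freidlin--Wentzell rate functional for dX = b(X)dt + eps dW, specialized
% to b(x) = -gamma x + K.  For Lambda := phi - x one has
% \dot\phi - b(\phi) = \dot\Lambda + \gamma\Lambda, so the cost of a
% deviation path Lambda with Lambda(0) = 0 is
\[
  I_T(\Lambda) \;=\; \frac{1}{2}\int_0^T
    \bigl(\dot\Lambda(t) + \gamma\,\Lambda(t)\bigr)^2\,dt .
\]
% Substituting u(t) := e^{\gamma t}\Lambda(t), so that
% \dot\Lambda + \gamma\Lambda = e^{-\gamma t}\dot u, and applying
% Cauchy--Schwarz to paths with \Lambda(0) = 0 and |\Lambda(T_1)| = \delta:
\[
  I_{T_1}(\Lambda)
    \;\ge\; \frac{\delta^2 e^{2\gamma T_1}}{2\int_0^{T_1} e^{2\gamma s}\,ds}
    \;=\; \frac{\gamma\,\delta^2}{1 - e^{-2\gamma T_1}},
\]
% with equality for \Lambda^*(t) = \delta\,\sinh(\gamma t)/\sinh(\gamma T_1).
% The right-hand side decreases in T_1, hence
\[
  \inf\Bigl\{\, I_T(\Lambda) \;:\; \Lambda(0)=0,\;
      \sup_{t\le T}|\Lambda(t)| \ge \delta \,\Bigr\}
  \;=\; \frac{\gamma\,\delta^2}{1 - e^{-2\gamma T}} .
\]
```

Note that the bound is increasing in γ and quadratic in δ, consistent with the intuition that a strong leak makes large deviations of the potential from the deterministic synchronized solution exponentially unlikely.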