Rate chaos and memory lifetime in spiking neural networks

Rate chaos is a collective state of a neural network characterized by slow irregular fluctuations of firing rates of individual neurons. We study a sparsely connected network of spiking neurons which demonstrates three different scenarios for the emergence of rate chaos, based either on increasing the synaptic strength, increasing the synaptic integration time, or clustering of the excitatory synaptic connections. Although all the scenarios lead to collective dynamics with similar statistical features, it turns out that the implications for the computational capability of the network in performing a simple delay task are strongly dependent on the particular scenario. Namely, only the scenario involving slow dynamics of synapses results in an appreciable extension of the network's dynamic memory. In other cases, the dynamic memory remains short despite the emergence of long timescales in the neuronal spike trains. © 2022 The Authors. Published by Elsevier Ltd.


Introduction
Patterns of spontaneous activity of cortical neurons are typically highly irregular [1-3], showing Poisson-like statistics of inter-spike intervals on short timescales [4,5] and firing rate fluctuations over longer timescales [5-11]. The classical theory accounting for the origin of irregular neuronal behavior invokes the paradigm of an approximate balance between strong excitation and inhibition, which for most of the time cancel each other out, leaving the network activity to be driven by fluctuations that intermittently disrupt the balanced conditions [12-20]. Asynchronous balanced states with irregular, weakly correlated local activity have been observed in networks of realistic spiking neurons [21-24]. These states may be referred to as homogeneous, since the firing rates of all the neurons are approximately equal and stationary.
Nevertheless, the balanced excitation-inhibition paradigm by itself does not resolve the second part of the variability problem, since it does not explain how networks of fast spiking neurons may generate rate fluctuations over slow timescales. These fluctuations are a hallmark of the state typically called rate chaos, or the heterogeneous state [25,26], a term describing both the temporal rate variability and the nonuniform distribution of rates over the neurons. Rate chaos was first observed in models of networks of rate neurons [27], which explicitly demonstrated the transition from regular to chaotic dynamics [28]. In contrast, networks of spiking neurons in the thermodynamic limit are expected to be always chaotic [12]. Nevertheless, the question concerning the transition from the homogeneous to the heterogeneous chaotic regime in spiking networks is still debated; a potential physical explanation is that above the transition, the fluctuations in both neuronal inputs and outputs become strongly colored [29]. Recently, much effort has been made to understand the precise dynamical mechanisms underlying rate chaos and its possible functional role in neural computations. Several scenarios leading to the onset of rate chaos in realistic networks of spiking neurons have been revealed. In particular, high variability and slow rate dynamics have been demonstrated in networks with clustered excitatory connections [15,30]. Highly heterogeneous chaotic states were shown to emerge in a sparse random network for strong synaptic couplings [25,29]. Finally, a transition to rate chaos has been observed when the synaptic integration time becomes large compared to the characteristic times of neuronal dynamics [16,17].
Rate chaos is believed to play an important part in facilitating the complex computations unfolding in the brain [31-36], since it has been shown to support fluctuations over longer timescales, giving rise to slow neural activity associated with behavior, learning and memory. Chaotic spiking networks have already been successfully trained to perform computational tasks such as generating signals, classifying inputs or predicting nonlinear dynamics [37,60,61]. With this in mind, it is reasonable to assume that using heterogeneous rather than homogeneous chaotic states can improve the computational capabilities of the network. Indeed, many applications involve accumulating data over timescales of seconds, which requires a comparatively long dynamic memory. In the present study we address the problem of whether the slow rate fluctuations observed in the heterogeneous chaotic state may provide the basis for such a dynamic memory.
As our basic model, in Section 2 we introduce a network of theta neurons with random sparse connectivity which is initially set to a homogeneous state with stationary local firing rates and Poisson-like uncorrelated spike trains of individual neurons. In Section 3, we demonstrate that by varying certain macroscopic control parameters, one may induce a transition to a heterogeneous chaotic regime (rate chaos) featuring slowly fluctuating firing rates. In particular, we consider three generic scenarios for such a transition, involving (i) strengthening of the synapses, (ii) slowing the dynamics of synapses and (iii) clustering of excitatory connections. Remarkably, the spike trains and their statistics appear quite similar for all three scenarios. In Section 4, we investigate whether our system in the state of rate chaos may be applied as a dynamic memory network [38,39], intended to perform a simple delay task by preserving the input history within its internal state. The network is considered in a reservoir setting [40,41], whereby its dynamical state serves as a representation of the input stimuli and the desired response is extracted via a linear readout of output nodes. The maximal delay for which the task is fulfilled with satisfactory precision is used to estimate the dynamic memory lifetime. Surprisingly, the network performance shows a striking dependence on the method by which the transition to the heterogeneous state is induced. We will show that the network's dynamic memory extends appreciably only for the scenario based on slow synapses, but remains virtually unchanged compared to that in a homogeneous state if the rate chaos is induced by strong synapses or clustered connections.

Model
Our main model is a network of N theta neurons [42,43] whose dynamics is given by Eq. (1), where θ_j ∈ S¹, j = 1, 2, …, N are the local phase variables related to the respective membrane voltages by V_j = tan(θ_j/2). The input currents I_j comprise two terms, I_j = I_b + s_j, where I_b denotes the constant bias current and s_j is the synaptic current. The SNIPER bifurcation at I_j = 0 mediates between the excitable (I_j < 0) and oscillatory (I_j > 0) regimes. A neuron j is said to have fired a spike when its phase crosses the value θ_j = π. The spikes are filtered by double exponential synapses of the form (2)-(3), which can account for the separate timescales of the rapid neurotransmitter binding followed by its slow unbinding [44]. The parameters τ_r and τ_d denote the synaptic rise and decay times, respectively, r_j is the synaptic output current, and t_j^p denote the firing times of neuron j.
The total synaptic current received by neuron j is given by Eq. (4), where g is the coupling strength and A_jk are the elements of the adjacency matrix determining the structure of the synaptic connections. Note that the described model (1)-(4), with some modifications, will provide the reservoir within the computational framework elaborated in Section 4.
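The displayed equations (1)-(4) were lost in extraction. A standard form consistent with the surrounding description (theta-neuron phase dynamics with the SNIPER bifurcation at I_j = 0, a second-order double-exponential synaptic filter with rise time τ_r and decay time τ_d, and linear coupling through the adjacency matrix) is sketched below; this is a reconstruction of the common textbook form, not necessarily the authors' exact notation:

```latex
\begin{align}
\dot{\theta}_j &= (1 - \cos\theta_j) + (1 + \cos\theta_j)\, I_j , \\
\tau_d \dot{r}_j &= -r_j + h_j , \\
\tau_r \dot{h}_j &= -h_j + \sum_p \delta\!\left(t - t_j^p\right) , \\
s_j &= g \sum_{k=1}^{N} A_{jk}\, r_k .
\end{align}
```

The first equation is the standard theta-neuron model, equivalent under V_j = tan(θ_j/2) to the quadratic integrate-and-fire neuron; the cascade of two first-order filters produces the double-exponential synaptic response described in the text.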
First we consider a sparse random network with connection probability p = 0.1. For simplicity, we do not divide the neurons into excitatory and inhibitory pools, and draw the coupling strength of each nonzero connection from a Gaussian distribution with zero mean and variance (Np)⁻¹. Together with a slightly negative bias current I_b = −0.001, this ensures the balance between excitation and inhibition [12,45], such that the overall network activity is fluctuation-driven rather than mean-driven [46].
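This connectivity can be sketched in a few lines; the mask-plus-Gaussian construction below is an assumption consistent with the text (the paper does not give code), and all variable names are ours:

```python
import numpy as np

# Sparse balanced connectivity sketch (assumed construction).
rng = np.random.default_rng(0)
N, p = 400, 0.1                       # network size, connection probability
mask = rng.random((N, N)) < p         # nonzero entries of the adjacency A_jk
np.fill_diagonal(mask, False)         # no self-connections (our assumption)
# Zero-mean Gaussian weights with variance 1/(Np) keep the summed synaptic
# input O(1), balancing excitation against inhibition on average.
W = np.where(mask, rng.normal(0.0, 1.0 / np.sqrt(N * p), (N, N)), 0.0)
```

With N = 400 and p = 0.1 each neuron receives on average Np = 40 connections, and the 1/√(Np) weight scale is what makes the net input fluctuation-driven rather than mean-driven.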
To study the collective dynamics of the network, we excited 10 arbitrary neurons and simulated the network activity for t = 20 s. The synaptic rise and decay time constants are fixed to τ_r = 2 ms and τ_d = 20 ms, and the coupling strength g is used as the control parameter. One finds that if the coupling strength exceeds a certain critical value g_c, chaotic self-sustained activity emerges from the quiescent state. Indeed, for weak couplings g < g_c, the elevated network activity observed upon stimulation is only transient and dies out in several seconds. The precise value of the critical coupling strength g_c depends on the bias current and equals g_c ≈ 0.27 for the chosen parameter set. For coupling strengths slightly above the threshold, the network activity is sparse, irregular and weakly correlated, as illustrated in Fig. 1 for g = 0.3. The spike trains of individual neurons are sub-Poissonian with the assembly-averaged firing rate 3.74 ± 1.24 Hz, and the respective distributions of inter-spike intervals are characterized by the mean coefficient of variation 0.81 ± 0.13, see Fig. 1d) and e). Weak correlation between the outputs of different neurons is evinced by the average cross-correlation coefficient 0.002 ± 0.047, see Fig. 1f). The assembly-averaged autocorrelation function of individual neurons quickly decays at time lags ~50 ms, of the order of the characteristic synaptic time, cf. Fig. 1c), indicating the absence of any longer timescales in the local neuronal dynamics.

Three scenarios for transition to rate chaos

Further increase of the coupling strength results in significant changes of the network state, as illustrated in Fig. 2 for g = 1. The stronger coupling not only increases the mean firing rate to 18.14 ± 6.96 Hz, but also induces a transition to the so-called rate chaos or heterogeneous state [25,29], where the firing rates of neurons exhibit slow fluctuations. At the level of spiking dynamics, the signature effect is that the neurons tend to generate bursts of spikes alternating with long periods of quiescence. The strong variability of the spike trains is corroborated in Fig. 2e), which indicates large coefficients of variation of the inter-spike intervals, whose mean value 1.63 ± 0.30 is substantially supra-Poissonian. The emergence of bursting dynamics is also reflected in the shape of the assembly-averaged autocorrelation function, which now decays at longer lags ~400 ms, corresponding to the typical duration of a burst, see Fig. 2c). Also note that in the heterogeneous regime, the dynamics of individual neurons becomes more correlated compared to the homogeneous state, cf. Fig. 2f), as corroborated by the value 0.002 ± 0.16 for the mean correlation coefficient of individual spike trains.
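The coefficient of variation used above to distinguish sub- from supra-Poissonian spiking is a simple statistic of the inter-spike intervals; a minimal sketch (helper name is ours) with two hypothetical trains for illustration:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals: std/mean of the
    ISIs. CV ≈ 1 for a Poisson train, CV < 1 for more regular spiking,
    CV > 1 for bursty (supra-Poissonian) trains as in the rate-chaos regime."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

# Illustration on synthetic trains (not the paper's data):
periodic = np.arange(0.0, 100.0, 0.25)              # perfectly regular train
rng = np.random.default_rng(1)
poisson = np.cumsum(rng.exponential(0.25, 20_000))  # Poisson-like train
```

A perfectly periodic train gives CV = 0, while a long Poisson train gives CV close to 1, matching the benchmarks quoted in the text (0.81 below threshold, 1.63 in the rate-chaos regime).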
Fig. 3 shows the dependence of several characteristics of the network dynamics on the coupling strength g. Although at g = g_c the onset of chaotic spiking from quiescence is sudden, the transition from homogeneous to heterogeneous irregular activity is rather smooth. In particular, the average firing rates illustrated in Fig. 3a) show steady growth, and the coefficient of variation of neuronal inter-spike intervals increases almost linearly with coupling strength, see Fig. 3b). Similarly, the average correlation time displays small values ~τ_d for weak coupling and increases to hundreds of milliseconds for strong coupling, cf. Fig. 3c). Note that the activity always remains non-uniform over the neurons, with the standard deviation of the local firing rates being of the same order as their mean values.
Apart from increasing the coupling strength, another scenario previously reported to induce the transition to rate chaos is based on increasing the characteristic synaptic time [16,17]. To test for such a scenario in our model, we keep the coupling strength fixed at g = 0.5 and vary the synaptic decay time τ_d. The results in Fig. 4 reveal that slowing down the synapses indeed induces the transition to heterogeneous irregular activity. Although the average firing rate does not change appreciably and equals ≈8 Hz for any τ_d > 20 ms, see Fig. 4a), the mean coefficient of variation of the inter-spike intervals in Fig. 4b) visibly grows with the synaptic time. The correlation time also shows a significant increase, reaching values larger than 600 ms for τ_d > 60 ms, see Fig. 4c). The features of heterogeneous irregular activity observed for slow synapses are similar to those for strong couplings, except that the overall network activity r = 〈r_j〉 shows slower fluctuations, see Fig. S1 of the Supplementary material.
So far we have considered only networks with a random connection topology. However, real cortical microcircuits are highly non-random [47-49], and a statistically nonuniform network structure may itself induce heterogeneous activity. For example, clustering of synaptic connections was shown to give rise to slow fluctuations of firing rates and high variability of spike trains [15,30,50,51]. In our model, it turns out that clustering of all connections does not result in a significant change of the network activity. However, clustering of the excitatory (positive) connections alone yields a notable effect. To introduce clustering, we divide the network into M = 5 equal groups and rewire the excitatory connections in such a way that the connection probability inside each group p_in becomes larger than the connection probability between the groups p_out. The clustering coefficient R = p_in/p_out is introduced to measure the degree of clustering, whereby the value R = 1 corresponds to a homogeneous random network. Note that the described rewiring scheme preserves the total network connectivity p = p_in/M + p_out(1 − 1/M).

Fig. 2. Heterogeneous state of the network with strong synapses. Presentation style and the network parameters are the same as in Fig. 1, except for the coupling strength g = 1.

Fig. 5 shows the characteristics of the network dynamics depending on the clustering degree R. Note that due to the small size of the clusters, the network dynamics significantly depends on the particular realization of the connectivity matrix, and the results we present were obtained by averaging over 20 different network configurations. Though the average firing rates in Fig. 5a) do not show much variation with R, one still notes the onset of heterogeneous activity, with the highest variability at an intermediate clustering degree R ~ 4. While the mean coefficient of variation exceeds 1 only slightly, its standard deviation is about the same size, see Fig. 5b), which indicates that there is a significant number of neurons with CV ≈ 2 and larger. The mean correlation time also peaks at intermediate clustering degrees and reaches several hundreds of milliseconds, see Fig. 5c). The manifestation of heterogeneous irregular activity for networks with intermediate clustering is similar to the two previous scenarios. Namely, the neurons tend to generate bursts of spikes rather than isolated spikes, while the spike trains of different neurons remain weakly correlated. Note, however, that the local spiking rates, as coarse-grained quantities, tend to become correlated within the clusters, which induces pronounced slow fluctuations of the cluster activities, see Fig. S2 of the Supplementary material.
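The rewiring constraint p = p_in/M + p_out(1 − 1/M), together with R = p_in/p_out, fixes both probabilities. A small helper (our naming, derived directly from the stated constraint) makes the bookkeeping explicit:

```python
def cluster_probs(p=0.1, M=5, R=4.0):
    """Within- and between-cluster connection probabilities for clustering
    degree R = p_in / p_out, preserving the total connectivity
    p = p_in / M + p_out * (1 - 1/M), as stated in the text."""
    p_out = p / (R / M + (1.0 - 1.0 / M))
    p_in = R * p_out
    return p_in, p_out
```

For the parameters of the paper (p = 0.1, M = 5, R = 4) this gives p_in = 0.25 and p_out = 0.0625, and R = 1 recovers the homogeneous network with p_in = p_out = p.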
For larger clustering degrees, the local imbalance between excitation and inhibition becomes too strong, and one of the clusters typically settles into a regime of mean-driven activity with fast and regular spiking, cf. Fig. S3 of the Supplementary material. At the same time, this cluster tends to inhibit and synchronize the other clusters, suppressing the irregular activity. Note that the time-averaged activity of the neurons becomes quite diverse in this regime because of the high firing rate of the active cluster and the low firing rates of the other ones.

Computational capabilities of the network
After the detailed study of the network dynamics, we investigate the relation between its intrinsic activity and its computational capabilities, in particular its ability to serve as a dynamic memory network [38,39]. This recently introduced concept involves creating networks optimized for question-answering problems, whereby the network processes the input, forms an episodic memory and generates the relevant output. Here, we train the network to perform a simple computational delay task defined as follows: the network receives a single input in the form of a spike train and has to indicate whether or not it has received a spike within a period of given duration. In particular, the network should respond with "1" if it has received at least one spike within the last τ milliseconds and with "0" otherwise. The maximum value of the delay τ at which the network shows sufficient accuracy provides a reasonable estimate of its dynamic memory lifetime.
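The target signal of this delay task is straightforward to construct; a minimal sketch (the helper name and the exact half-open window convention are our assumptions):

```python
import numpy as np

def delay_task_target(spike_times, t_grid, tau):
    """Desired output of the delay task: 1 at time t if at least one input
    spike fell within the window (t - tau, t], else 0."""
    spikes = np.asarray(spike_times, dtype=float)
    return np.array([int(np.any((spikes > t - tau) & (spikes <= t)))
                     for t in t_grid])
```

For example, with input spikes at t = 1.0 s and t = 3.0 s and τ = 0.2 s, the target is 1 only at evaluation times within 200 ms after either spike.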
As the input signal, we use a Poisson spike train with rate λ = 1 Hz. To feed the input into the network, the synaptic current (4) received by each neuron is extended by an input term, where g_inp is the input gain, the weights u_j are drawn independently from a uniform distribution on [−1, 1], and r_inp is the input synaptic current given by the same set of equations as Eqs. (2) and (3). The output of the network is calculated as a linear combination of the readout states, with the output weights w_j tuned to train the network to perform the required task. The network response counts as "1" if its output exceeds 1/2 and as "0" otherwise. The described approach conforms to the classical computational framework of reservoir computing [40,52], a machine learning method derived from liquid-state and echo-state machines [53,54], which solves tasks using the response of a dynamical system, called a reservoir, to a certain input, the output being generated by linearly combining the states of the readout nodes. Reservoir computing has the advantage of an efficient training process, since only the readout weights affecting the output are trained, while the input weights and the weights within the reservoir remain unchanged [41]. In training the output weights w_j, we used either the method of least squares or the recursive least squares algorithm [55]. Both methods turned out to provide similar results, and we preferred the least squares method for being the faster of the two. After a certain training period t_train, the network's performance was estimated during a test period t_test = 100 s, see Fig. 6.

To characterize the network performance, we measured the error ε as a weighted sum of the rates of false outputs, where ν_0 and ν_1 are the rates of the false output being equal to zero or one, respectively, and ε_0 and ε_1 are the error weights, set such that a constant output of either zero or one leads to a total error ε = 1. The network performance P is then estimated as the inverse of the error, P = 1/ε. One observes that the resulting performance improves with increasing training time and reaches its maximum after several tens of seconds, see Fig. S4 of the Supplementary material. In our numerical experiments, we used a training time of t_train = 100 s, which warrants optimal network performance. One also notes the important role of the input gain g_inp for the network performance. In particular, if g_inp is too small, the input signal fails to suppress the chaotic activity, resulting in poor network performance. Thus, the input gain has to be sufficiently strong; on the other hand, its excessive increase does not yield a significant further improvement, see Fig. S5 of the Supplementary material. With this in mind, we have fixed g_inp = 10 in all the numerical experiments considered below.
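The least-squares readout and the weighted error measure can be sketched together. The tiny ridge term is our addition for numerical stability, and the weighting ε = ε_0 ν_0 + ε_1 ν_1 with ε_0 = 1/q, ε_1 = 1/(1 − q) (q being the fraction of "1" targets) is our reconstruction of the calibration described in the text, under which any constant output scores exactly ε = 1:

```python
import numpy as np

def train_readout(X, y, lam=1e-8):
    """Least-squares readout weights w minimizing ||X w - y||^2, with a tiny
    ridge term lam for numerical stability (our addition). Only w is trained;
    the reservoir states X are left untouched, as in reservoir computing."""
    N = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)

def delay_task_error(y_true, y_pred):
    """eps = eps0*nu0 + eps1*nu1, where nu0 (nu1) is the rate of false
    zeros (ones); the weights are chosen so that a constant all-zero or
    all-one output scores exactly 1 (reconstructed from the description)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    q = y_true.mean()                              # fraction of "1" targets
    nu0 = np.mean((y_pred == 0) & (y_true == 1))   # false zeros
    nu1 = np.mean((y_pred == 1) & (y_true == 0))   # false ones
    return nu0 / q + nu1 / (1.0 - q)
```

In the paper's pipeline, X would hold the synaptic currents r_j(t) sampled over the training period, the continuous readout X w would be thresholded at 1/2, and the performance would be P = 1/ε.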
To address the problem of enhancing the dynamic memory lifetime of the network, we trained it to perform the delay task for different values of the delay τ and analyzed the performance P. The results presented in Fig. 7 show the performance as a function of the delay for several different network configurations. In particular, we started from the network with weak and fast synapses and no clustering of connections (g = 0.5, τ_d = 20 ms, R = 1), which admits a homogeneous state. The optimal performance P ≈ 15 is reached for the delay τ ≈ 200 ms. For small delays τ < 100 ms, the network performance is poor, since the network does not have enough time to respond to the stimulus. For large delays, the performance quickly drops, reaching half of the maximal value at τ ≈ 500 ms. This indicates that the dynamic memory of the network lasts about 0.2 s.
Next, we increased the strength of the synapses to g = 2 and checked whether this would improve the network performance. For such a strong coupling, the spike trains demonstrate a much longer correlation time of about 250 ms, see Fig. 3c), which would intuitively suggest a much longer dynamic memory. Surprisingly, however, strengthening the synapses changed the network performance only by a small margin, and the observed dynamic memory was still about 0.25 s. We have also studied the influence of clustering with R = 4, the value corresponding to the maximal correlation time of about 230 ms, see Fig. 5c). This led to a slightly improved network performance, which becomes optimal for τ = 300 ms and drops to half at τ = 600 ms. Thus, the dynamic memory lifetime in a clustered network increases only slightly compared to the network in the homogeneous state, reaching 0.3 s.
Finally, we considered the scenario involving slow synapses by increasing the synaptic decay time to τ_d = 60 ms. This approach had the most profound impact on the dynamic memory of the network. We found that the network performance indeed improved substantially, with a flat maximum P ≈ 30 reached over the wide interval τ = 500-1000 ms. For larger delays, the network performance quickly deteriorates, so the network's dynamic memory can be estimated as 1 s. We have also checked whether the extension of the dynamic memory may be caused by the change of the stimulus itself, since for longer synaptic times it comprises longer pulses. To do so, we decreased the decay time of the input synapses alone to τ_d,inp = 20 ms while keeping the internal synapses at τ_d = 60 ms, and observed that the network performance decreased only marginally, still substantially outperforming the network with fast synapses. Indeed, the dynamic memory lifetime in this case is still about 1 s.

Discussion and conclusions
We have considered three different scenarios for the onset of rate chaos in a sparse network of spiking neurons and have examined their implications for the computational capability of the network in performing a simple dynamic memory task. The signature of rate chaos is the slow fluctuation of the firing rates of individual neurons, which introduces characteristic timescales longer than those associated with the spike timing dynamics. The first scenario involves strengthening of the synaptic coupling, the second relies on slowing the dynamics of synapses, and the third is observed when a certain degree of clustering is applied to the synaptic connections. Though all three scenarios have been reported previously, they were considered separately for different network models; the present study provides for the first time a universal model where all the scenarios can be observed and compared by changing different system parameters, namely the coupling strength, the synaptic decay time and the clustering degree.
At the microscopic level of individual neuronal spike trains, all three scenarios have been shown to yield very similar features. Nevertheless, the computational capabilities of the underlying regimes turned out to be quite different, at least in performing simple delay-related tasks. We have demonstrated that the dynamic memory of the network in the heterogeneous state depends significantly on the mechanism giving rise to this state. Namely, for the scenarios involving strong synapses or clustering of connections, the duration of the dynamic memory remains approximately the same as in the homogeneous state. In contrast, the scenario involving slow synapses leads to a substantial increase in the duration of the dynamic memory, found to reach values above 1 s.
Identifying mechanisms that allow cortical networks to perform computational tasks on timescales of seconds, while the local neuronal activity unfolds on timescales of milliseconds, has been a longstanding problem in neuroscience [10,29,56-58]. It has already been shown that merely increasing the network size has very little impact on the memory lifetime, since the latter scales only as the logarithm of the network size [59]. Thus, the underlying problem has to be resolved by finding an appropriate mechanism that endows the network dynamics with long timescales. Our study nevertheless indicates an additional subtlety: the mere presence of longer timescales in the spike trains may not warrant longer memory lifetimes.
We emphasize that our findings should by no means be interpreted as questioning the general usefulness of heterogeneous chaotic states as a substrate for computation in networks of spiking neurons. Such states have already been shown to be beneficial for complex computational tasks including decision-making, categorization and associative memory [25,32,33,35,36]. Our results rather imply that spiking networks in a state of rate chaos are not optimal candidates for dynamic memory networks, so that a more suitable paradigm is required to carry out computations involving an extended temporal memory.
CRediT authorship contribution statement

V.V.K., A.V.K., I.F., M.P., and M.S. designed and performed the research as well as wrote the paper.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Chaos, Solitons and Fractals

Fig. 1. Homogeneous state of the network. (a) Spike trains of five randomly selected neurons; (b) average activity of the network 〈r〉 = (1/N)∑_{j=1}^N r_j; (c) averaged autocorrelation function of the neurons; (d) distribution of the local firing rates; (e) distribution of coefficients of variation of neuronal inter-spike intervals; (f) distribution of cross-correlation coefficients between neuronal inter-spike intervals. Network parameters: N = 400, p = 0.1, τ_r = 2 ms, τ_d = 20 ms, g = 0.3.

Fig. 3. Statistical features of the network dynamics as functions of the coupling strength g. (a) Average firing rate of the neurons (solid line) plus/minus its standard deviation (dashed lines); (b) average coefficient of variation (solid line) plus/minus its standard deviation (dashed lines); (c) average correlation time of the spike trains.

Fig. 4. Statistical features of the network dynamics as functions of the synaptic decay time τ_d. Presentation style is the same as in Fig. 3.

Fig. 5. Statistical features of the network dynamics as functions of the clustering coefficient R. Presentation style is the same as in Fig. 3.

Fig. 6. Training the network and testing its performance. (a) Time trace of the average network activity; (b) network output (gray) and the desired output (black); (c) the input signal.

Fig. 7. Performance of the network P as a function of the delay τ. Solid line with circles: network in a homogeneous state; dotted line with squares: network with strong synapses (g = 2); dash-dotted line with stars: network with clustered connections (R = 4); dashed line with diamonds: network with slow synapses (τ_d = 60 ms); thick dashed line with diamonds: network with slow internal synapses (τ_d = 60 ms) but fast input synapses (τ_d,inp = 20 ms).