Network structure of neural systems supporting cascading dynamics predicts stimulus propagation and recovery

Many neural systems display cascading behavior characterized by uninterrupted sequences of neuronal firing. When the distributions of cascade size and duration follow a power law, theoretical models suggest that such dynamics support optimal information transmission and storage. However, the unknown role of network structure in shaping neural dynamics precludes an understanding of how variations in network structure either support or impinge upon information processing. Here, we develop a theoretical understanding of how network structure supports information processing through network dynamics and validate our theory with empirical data. Using a generalized spiking model and mathematical intuitions from linear systems theory, network control theory, and information theory, we show how network structure can be designed to temporally extend the propagation and recovery of certain stimulus patterns. Moreover, we observe cycles as structural and dynamic motifs that are prevalent in such networks. Broadly, our results demonstrate how network structure constrains cascading dynamics and supports persistent activation that could potentially contribute to cognitive faculties, such as working memory or attention.

In many neural systems, neurons display spontaneous cascades of activity, in which neurons spike in patterns across consecutive time windows. These cascades often form heavy-tailed distributions of size and duration. When such distributions follow a power law, the cascades are called avalanches and have been linked to optimal information processing in critical systems (Beggs & Plenz, 2003; Kinouchi & Copelli, 2006). However, often left implicit in the analysis of this phenomenon are (i) the computational function of avalanches without the assumption of self-organized criticality (Priesemann et al., 2014; Touboul & Destexhe, 2017) and (ii) the network structure that supports various cascading dynamics (Perin, Berger, & Markram, 2011; Brunel, 2016). Thus, it is not yet clear how network structures shape cascading dynamics, which in turn perform computations (Larremore, Shew, & Restrepo, 2011; Chambers & MacLean, 2016).
Here, we address this gap in knowledge through a series of analyses and numerical simulations of a stochastic neural network model instantiated upon various network structures. Using linear systems theory, network control theory, and information theory, we demonstrate that network structure constrains cascade duration, identify topological features that extend cascade duration, and show that long cascade duration allows recovery of stimuli. Importantly, we empirically validate these theoretical results with multielectrode array (MEA) recordings from neurons in the mouse somatosensory cortex (Ito et al., 2016). Collectively, our findings show that the network topology reported extensively in the empirical literature supports the persistent activation of a cluster of neurons, which in turn allows for the recovery of stimulus patterns implicated in working memory (Durstewitz, Seamans, & Sejnowski, 2000; Eriksson, Vogel, Lansner, Bergström, & Nyberg, 2015).

Network structure constrains cascade duration
To investigate the role of network structure, we first formalize a network as a directed graph of nodes V = {1, . . . , n} and edges E ⊆ V × V, and we represent this graph as a weighted adjacency matrix A = [a_ij] (Figure 1a). We model the activity of an n-neuron network as a binary vector y(t) ∈ {0, 1}^n that evolves stochastically as y_i(t) ∼ B(a_i · y(t − 1)), where B(p) is a Bernoulli process with probability min(1, max(0, p)).
The average state of the stochastic model can be written as E[y(t)] = x(t), and given equal initial states x(0) = y(0) and ∀i ∈ V : ∑_j a_ij ≤ 1, this average neural activity evolves as a linear system x(t) = Ax(t − 1) (Ju, Kim, & Bassett, 2019).
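As a concrete illustration, the stochastic model and its linear mean-field description can be sketched in a few lines. This is a minimal sketch, not the authors' code; the row normalization, network size, and initial state are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cascade(A, y0, t_max=100):
    """Stochastic spiking model: y_i(t) ~ Bernoulli(clip(a_i . y(t-1), 0, 1)).

    Returns the sequence of binary state vectors until the cascade
    terminates (all-zero state) or t_max steps elapse.
    """
    y = np.asarray(y0, dtype=float)
    states = [y.copy()]
    for _ in range(t_max):
        p = np.clip(A @ y, 0.0, 1.0)             # per-neuron spike probabilities
        y = (rng.random(len(y)) < p).astype(float)
        states.append(y.copy())
        if not y.any():                          # quiescent: cascade over
            break
    return states

# Mean-field description: with row sums of A at most 1, the expected
# activity evolves linearly, x(t) = A x(t-1) = A^t x(0).
n = 8
A = rng.random((n, n))
A /= 1.25 * A.sum(axis=1, keepdims=True)         # enforce row sums < 1
y0 = np.zeros(n); y0[0] = 1.0
x_t = np.linalg.matrix_power(A, 5) @ y0          # expected activity at t = 5
```

Averaging many runs of `simulate_cascade` over the same initial state approaches the linear trajectory `x_t`, which is what licenses the eigenvalue analysis used below.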
From this formulation, we can analytically represent any pattern of neural activity y(t) at time t as a state s_i ∈ {0, 1}^n.
Then, we can write the evolution of activity as a Markov process, where the probability that the network is in any state is given by the probability vector p(t) = [P(y(t) = s_1); . . . ; P(y(t) = s_{2^n})]. As a Markov process, the evolution of this probability vector is a linear map given by a transition matrix T, such that p(t) = T p(t − 1) = T^t p(0). The fraction of cascades that terminate by time t is then simply the first entry of p(t), where s_1 = 0 is the quiescent state (Figure 1b).
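For small networks, the transition matrix T = f(A) can be constructed explicitly by enumerating all 2^n binary states. The following is a sketch under the model above; the example weights in A are illustrative.

```python
import itertools
import numpy as np

def transition_matrix(A):
    """Markov transition matrix over all 2^n binary states.

    T[j, i] = P(y(t) = s_j | y(t-1) = s_i), with neurons spiking
    independently given the previous state; s_1 = (0, ..., 0) comes first.
    """
    n = A.shape[0]
    states = list(itertools.product([0, 1], repeat=n))
    T = np.zeros((2 ** n, 2 ** n))
    for i, s_prev in enumerate(states):
        p = np.clip(A @ np.array(s_prev, dtype=float), 0.0, 1.0)
        for j, s_next in enumerate(states):
            # product of per-neuron Bernoulli probabilities
            T[j, i] = np.prod([p[k] if b else 1.0 - p[k]
                               for k, b in enumerate(s_next)])
    return T

A = np.array([[0.0, 0.6],
              [0.5, 0.0]])
T = transition_matrix(A)
p0 = np.zeros(4); p0[3] = 1.0        # start with both neurons active
p_t = np.linalg.matrix_power(T, 5) @ p0
terminated = p_t[0]                  # fraction of cascades over by t = 5
```

Because the quiescent state is absorbing (its column of T is a unit vector), `p_t[0]` grows monotonically with t, which is exactly the cascade-termination curve described above.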
Because the transition matrix is an explicit and deterministic function of the network, T = f(A) (Ju et al., 2019), cascade duration is analytically constrained by the network A. For a more general and tractable description, we then numerically demonstrated the role of network structure in cascade duration using eigenvalue analysis. Because the average of the stochastic model E[y(t)] = x(t) evolves linearly as x(t) = Ax(t − 1), we used the dominant eigenvalue of the network matrix, λ_1 = max_{λ_i ∈ eig(A)} |λ_i|, to predict the distribution of cascade duration. We fitted this distribution by maximum likelihood estimation (MLE) as a truncated power law p(x) ∼ x^−α e^−x/τ with parameters α, the log-log slope of the power law, and τ, the scale of the exponential truncation (Clauset, Shalizi, & Newman, 2009; Alstott, Bullmore, & Plenz, 2014). To avoid overgeneralizing, we bounded τ by the maximum cascade duration and denote this bounded value as τ′. In simulations of cascades on 192 networks of 2^8 nodes, the parameters α and τ were monotonically and linearly correlated with λ_1, as reflected in a Spearman's ρ of 0.97 (p ≈ 0) and a Pearson's r of 0.99 (p = 3.9 × 10^−31; λ_1 > 0.8), respectively.

To empirically validate this theory, we used λ_1 to predict cascade duration in 25 multielectrode array (MEA) recordings of spiking neurons in the mouse somatosensory cortex (Ito et al., 2016). To measure λ_1, we derived directed networks from the recordings using vector autoregression. We then binned the spikes into 2 ms bins, identified cascades of consecutively active bins, and fitted distributions of cascade duration to truncated power laws. In these recordings, the parameters α and τ are monotonically and linearly correlated with λ_1, as reflected in a Spearman's ρ of 0.78 (p = 7.4 × 10^−6) and a Pearson's r of 0.62 (p = 9.9 × 10^−4), respectively (Figure 2c,d).
Moreover, in simulations of cascades on these empirical networks, we find a significant positive correlation between τ and λ_1.
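The central effect here, that networks with a larger spectral radius λ_1 produce longer cascades, can be reproduced in a short simulation. This sketch rescales one random network to two spectral radii and compares mean cascade durations; the MLE fit to a truncated power law (e.g., via the `powerlaw` package cited above) is omitted, and the network parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def dominant_eigenvalue(A):
    """Spectral radius lambda_1 = max |eig(A)| of the network matrix."""
    return float(np.max(np.abs(np.linalg.eigvals(A))))

def cascade_duration(A, y0, t_max=500):
    """Length of one stochastic cascade started from binary state y0."""
    y = np.asarray(y0, dtype=float)
    for t in range(1, t_max + 1):
        p = np.clip(A @ y, 0.0, 1.0)
        y = (rng.random(len(y)) < p).astype(float)
        if not y.any():
            return t
    return t_max

# Rescale one sparse random network to different spectral radii and
# compare mean cascade durations from the same initial stimulus.
n = 64
base = rng.random((n, n)) * (rng.random((n, n)) < 0.1)
lam0 = dominant_eigenvalue(base)
y0 = np.zeros(n); y0[:4] = 1.0
mean_dur = {}
for lam in (0.5, 0.9):
    A = base * (lam / lam0)
    mean_dur[lam] = float(np.mean([cascade_duration(A, y0) for _ in range(300)]))
# Larger lambda_1 yields longer mean cascade duration.
```

Since the expected activity decays roughly as λ_1^t, the subcritical network with λ_1 = 0.9 sustains cascades far longer than the one with λ_1 = 0.5, mirroring the correlations reported above.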

Cyclical network topology extends cascades
Previous empirical studies have found cycles in the topology of cortical connectivity (Wang et al., 2006;Ko et al., 2011).
Here, we hypothesized that these cycles allow networks to display heavy-tailed distributions of cascade duration. To test this hypothesis, we simulated cascades in networks with varying cycle density, defined as the number of simple cycles divided by the number of edges. We found that as cycle density increases with edge rewiring, cascade duration also increases, with a Pearson's correlation coefficient of r = 0.82 (p = 1.1 × 10^−27; Figure 3a).
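The cycle-density measure can be made concrete as follows. This sketch uses a brute-force enumeration of directed simple cycles, which is tractable only for toy graphs (efficient enumeration, e.g. Johnson's algorithm as in `networkx.simple_cycles`, would be needed at scale); the example graphs are illustrative.

```python
from itertools import permutations

def count_simple_cycles(edges, n):
    """Brute-force count of directed simple cycles (tiny graphs only)."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
    count = 0
    for length in range(1, n + 1):
        for path in permutations(range(n), length):
            if path[0] != min(path):   # canonical rotation: count each cycle once
                continue
            closed = all(path[k + 1] in adj[path[k]] for k in range(length - 1))
            if closed and path[0] in adj[path[-1]]:
                count += 1
    return count

def cycle_density(edges, n):
    """Number of simple cycles divided by the number of edges."""
    return count_simple_cycles(edges, n) / len(edges)

# A feedforward chain has zero cycle density; one feedback edge creates a cycle.
chain = [(0, 1), (1, 2), (2, 3)]
loop = chain + [(3, 0)]
```

Rewiring the chain's edges into the feedback configuration raises the density from 0 to 0.25, the kind of manipulation used in the simulations above.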
Depending on the extent of refractoriness, these cycles, even if they exist structurally, may not support cyclical propagation of activity. Thus, to verify that activity can propagate through cycles in neuronal systems, we measured the number of n-cycles in the MEA data, where an n-cycle occurs when a neuron spikes again n time bins after its previous spike. We found that 1-, 2-, 3-, and 4-cycles occur an average of 1.9 times per cascade, with an average of 2.7 × 10^5 ± 1.6 × 10^4 cascades per recording (standard error; Figure 3b).
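Counting n-cycles in binned spike data reduces to measuring inter-spike gaps per neuron. This is a sketch of that measure on a binary raster; the toy raster stands in for the MEA recordings and is purely illustrative.

```python
import numpy as np

def count_n_cycles(raster, max_n=4):
    """Count n-cycles in a binary spike raster (neurons x time bins).

    An n-cycle occurs when a neuron spikes again exactly n bins after
    its previous spike.
    """
    counts = {n: 0 for n in range(1, max_n + 1)}
    for row in raster:
        t_spk = np.flatnonzero(row)
        gaps = np.diff(t_spk)            # bins between consecutive spikes
        for n in range(1, max_n + 1):
            counts[n] += int(np.sum(gaps == n))
    return counts

# Toy raster: neuron 0 spikes at bins 0, 2, 3 -> one 2-cycle and one 1-cycle.
raster = np.array([[1, 0, 1, 1, 0],
                   [0, 1, 0, 0, 0]])
cycles = count_n_cycles(raster)
```

Normalizing such counts by the number of cascades gives the per-cascade rates reported above.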

Highly controllable neurons extend cascades
Even within a single network, the duration of cascades can vary depending on the nodes that are stimulated, either spontaneously or exogenously. Recently, studies have used network control theory to quantify the capacity of a node to control the state of a network (Yan et al., 2017). In particular, one metric called average controllability measures the impulse response of a node (Pasqualetti, Zampieri, & Bullo, 2014), and thus, we hypothesized that it can predict cascade duration.
To test this hypothesis, we simulated cascades on 100-node random networks by stimulating individual nodes. We observed that mean cascade duration is positively correlated with the finite average controllability of the stimulated node.

To empirically validate this theory, we tested these predictions in the same MEA recordings used previously. We find that cascade duration is correlated with the mean average controllability of neurons active in the initial T states, with an average Spearman's ρ of 0.16 for T = 1 and 0.24 for T = 2 (p < 0.001, Bonferroni-corrected; Figure 4b). Considering that the cascades are stochastic and cannot be predicted deterministically, we find it notable to observe any correlation between controllability and cascade duration in empirical data. These results suggest that stimulus patterns must be tailored to a network to produce desired neural dynamics, raising the question of how stimulation, endogenous or exogenous, can be used for information processing.
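Average controllability of node i is commonly computed as the trace of the controllability Gramian with the input confined to that node. The following finite-horizon sketch assumes a stable network (spectral radius below 1, so the Gramian is finite, as the text requires); the network size and density are illustrative.

```python
import numpy as np

def average_controllability(A, i, horizon=200):
    """Average controllability of node i: trace of the controllability
    Gramian  W_i = sum_t A^t e_i e_i^T (A^T)^t,  truncated at a finite
    horizon. Equals sum_t ||A^t e_i||^2, finite for stable A.
    """
    v = np.zeros(A.shape[0]); v[i] = 1.0   # impulse at node i
    total = 0.0
    for _ in range(horizon):
        total += float(v @ v)              # accumulate ||A^t e_i||^2
        v = A @ v
    return total

rng = np.random.default_rng(2)
n = 50
A = rng.random((n, n)) * (rng.random((n, n)) < 0.2)
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # rescale to lambda_1 = 0.9
ac = np.array([average_controllability(A, i) for i in range(n)])
```

Nodes with high `ac` inject impulses whose energy persists longest in the linear dynamics, which is why they are the natural candidates for extending cascades.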

Lasting cascades allow information recovery
If certain networks and stimulus patterns can produce long-lasting cascades, how can these long-lasting cascades contribute to information processing? Intuitively, one cannot recover information about stimuli from cascades that have already terminated, but while cascades last, network states can be discriminated and provide information about stimuli. Such delayed recovery of stimuli can allow the associative learning of stimuli across temporal delays (Durstewitz et al., 2000; Eriksson et al., 2015).
To test this intuition, we first show in our Markov formulation that lasting cascades allow information recovery. We define stimulus recoverability as the mutual information I(S; Y_t) between initial states y(0) ∈ S and states y(t) ∈ Y_t at time t. Given two initial states y_i(0) and y_j(0), the probability vectors of the two cascades evolve as p_i(t) = T^t p_i(0) and p_j(t) = T^t p_j(0), where p_i(0) places all probability on y_i(0). For quickly decaying systems, p_i(t) and p_j(t) both concentrate probability on the quiescent state s_1, inherently reducing recoverability.
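The recoverability measure can be estimated directly by Monte Carlo: simulate cascades from each stimulus and tabulate the joint distribution of (stimulus, state at time t). This is a sketch under the spiking model above; the function names and toy parameters are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_information(joint):
    """I(S; Y) in bits from a joint probability table P(s, y)."""
    ps = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ py)[nz])))

def state_index(y):
    """Binary state vector -> index into the 2^n state enumeration."""
    return int(sum(int(b) << k for k, b in enumerate(y)))

def recoverability(A, stimuli, t, trials=2000):
    """Monte-Carlo estimate of I(S; Y_t) with stimuli chosen uniformly."""
    n = A.shape[0]
    joint = np.zeros((len(stimuli), 2 ** n))
    for s_idx, y0 in enumerate(stimuli):
        for _ in range(trials):
            y = np.array(y0, dtype=float)
            for _ in range(t):
                p = np.clip(A @ y, 0.0, 1.0)
                y = (rng.random(n) < p).astype(float)
            joint[s_idx, state_index(y)] += 1.0
    return mutual_information(joint / joint.sum())
```

When all cascades have collapsed to the quiescent state, every stimulus maps to the same state distribution and I(S; Y_t) falls to zero, which is the decay quantified in the simulations below.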
To numerically demonstrate this relationship, we simulated cascades on four types of networks (weighted random, random geometric, modular, Watts-Strogatz) and measured the mutual information I(S; Y_t) at each time t, along with the cascade duration.
Consistent with our intuition, we observe that mutual information is maintained longer when cascades last longer on average. To quantify the relationship between cascade duration and the decay in mutual information, we calculated the Pearson's correlation between average cascade duration and the slope of the linear regression between mutual information and time (Figure 5a). For all network types, we found mean correlation coefficients greater than 0.9 (Figure 5b). Collectively, these findings demonstrate the interplay among network structure, dynamics, and information processing.

Figure 5: Lasting cascades allow stimulus recovery. a, In an example network, the mean duration of a stimulus pattern is correlated with the rate of decay in mutual information (MI) between the stimulus pattern y(0) and a later network state y(t). b, The mean correlation between MI decay rate and mean duration is above 0.9 in simulations of four graph types (WR: weighted random; RG: random geometric; M4C: modular with 4 communities; WS: Watts-Strogatz, small world).

Conclusion
Neural systems display cascading dynamics that harbor the marks of a complex underlying network structure and support a diverse range of computations (Beggs & Plenz, 2003; Kinouchi & Copelli, 2006). Yet, precisely how network structure supports computations through cascading dynamics remains unclear. Here, using the rich mathematical properties of linear systems, we describe how network structure and stimulus patterns together determine the manner in which a stimulus propagates through the network. We then demonstrate that long-lasting cascades can allow for temporally delayed recovery of desired patterns of stimulation. Importantly, we validate these results in empirical data. Broadly, our work blends dynamical systems theory, network control theory, information theory, and computational neuroscience to address the wide gap in the field's current understanding of the relations between architecture, dynamics, and computation.