Simple model of complex dynamics of activity patterns in developing networks of neuronal cultures

Living neuronal networks in dissociated neuronal cultures are widely known for their ability to generate highly robust spatiotemporal activity patterns in various experimental conditions. Such patterns are often treated as neuronal avalanches that satisfy the power scaling law and thereby exemplify self-organized criticality in living systems. A crucial question is how these patterns can be explained and modeled in a way that is biologically meaningful, mathematically tractable, and yet broad enough to account for neuronal heterogeneity and complexity. Here we derive and analyse a simple network model that may constitute a response to this question. Our derivations are based on a few basic phenomenological observations concerning the input-output behavior of an isolated neuron. A distinctive feature of the model is that at the simplest level of description it comprises only two variables, the network activity variable and an exogenous variable corresponding to the energy needed to sustain the activity, and a few parameters such as network connectivity and efficacy of signal transmission. The efficacy of signal transmission is modulated by the phenomenological energy variable. Strikingly, this simple model is already capable of explaining the emergence of network spikes and bursts in developing neuronal cultures. The model behavior and predictions are consistent with published experimental evidence on cultured neurons. At the larger, cellular automata scale, introduction of the energy-dependent regulatory mechanism results in overall model behavior that can be characterized as balancing on the edge of the network percolation transition. Network activity in this state shows population bursts satisfying the scaling avalanche conditions. This network state is self-sustainable and represents an energetic balance between global network-wide processes and the spontaneous activity of individual elements.


Introduction
Exploiting concepts from physics to deal with problems in the life sciences is a widely recognized and successful strategy for developing a systematic and lawful understanding of complex phenomena observed in empirical data. One of the most striking illustrations of the potential and power of this approach is the well-known example of using the concept of self-organized criticality (SOC), the ability of systems to self-tune to the critical state, for explaining a number of puzzling effects in biological systems. Initially proposed as a model explaining how an abstract system can remain at a critical state in the presence of perturbations [1,2], the concept is now broadly used for describing biological neural networks (see e.g. [3,4]). It was shown that adaptively evolving networks, i.e., networks combining structural evolution of the network topology with dynamics in the network nodes [5], can exhibit highly robust global SOC-like behavior maintained by simple local network adjustment rules.
Striking examples of SOC-like behavior have been found in experimental studies of neuronal cultures [6][7][8], with the potential benefits of criticality in neural systems discussed in [9], [10]. The cultures grow autonomously and form synaptically coupled networks of living cells. After a period of initial growth and development, the cultures start to generate spontaneous activity patterns in the form of population bursts. These bursts are shown to satisfy the power scaling law and hence are often referred to as neuronal avalanches [6,7].
Since then, a number of mathematical models have been proposed for the simulation and analysis of spontaneous burst generation in neuronal networks. The spectrum of network features linked to the emergence of persistent bursts includes, but is not limited to, re-wiring, delays [11,12], cycles [13], [14], and frequency-dependent and spike-timing-dependent synaptic plasticity [15][16][17]. With regard to mathematical frameworks describing neuronal avalanches, models of network growth [18] and stochastic networks [19] have been put forward. These results substantially advance our understanding of the phenomenon; yet the macroscopic physical mechanisms (expressed e.g. in terms of energy balances) that are responsible for steering living neuronal networks to multiscale burst dynamics are still unclear.
It has been shown recently (see e.g. [20] and references therein) that population spikes and bursts can be attributed to cells' adaptation and short-term plasticity mechanisms. The authors showed that population spikes similar to the ones observed in in-vitro cultures can occur in networks of excitatory model neurons with leaky integrate-and-fire dynamics. These neurons were subjected to Gaussian white noise and equipped with adaptation and short-term plasticity mechanisms. The network connectivity was all-to-all. The effect of growing neuronal connectivity on bursts was studied in [21]. It has been found that network connectivity, expressed e.g. as the number of synaptic connections per neuron, may play an important role in spiking and bursting activity in cultures. Additional links between connectivity development, firing activity homeostasis, and criticality are exposed in [22].
In this work we further contribute to the idea that several features of the complex and critical behavior (e.g. neuronal avalanches, super-bursts, periodic and chaotic spiking) observed in live neuronal cultures and networks can be explained by just a few variables. These variables can be linked to local connectivity patterns (expressed e.g. by connection densities between cells) and neuronal activation dynamics.
We demonstrate that the main critical transitions can be captured by a hierarchy of simple models. Starting from an elementary phenomenological description of neural firing, we present a simple 2D mean-field model that is capable of showing a broad range of behaviors observed in cultures. Remarkably, the network connectivity parameter appears to be a natural bifurcation parameter of the model; it regulates the emergence of activity spikes from the initial silent mode to bursts to whole-network activation. This is in good agreement with previous works on SOC phenomena in developing neuronal networks [23], [22]. In contrast to [23], [22], however, we consider the problem from a different angle. Instead of focusing on the activity-connectivity interplay, we consider and analyze the system dynamics in the activity-energy plane for various values of connectivity. This adds modelling capabilities with regard to investigating the effects of oxygen and energy deprivation on neuronal network behavior whilst retaining the important links between network activity and connectivity. Moving further to a multi-agent model reveals the emergence of neuronal avalanches showing scale-free activity.
The manuscript is organized as follows. In Results we present the ingredients of the model at three different levels of phenomenological detail. We begin with a simple percolation-based geometric model describing the evolution of cell connectivity. The model allows us to accommodate biologically relevant features such as axons and dendrites; it also enables replication of the directional connectivity that is inherent to living systems, including neuronal cultures. The model analysis reveals that sharp changes in the overall clustering and connectivity of the evolving network, in both directed and undirected settings, are determined by a single parameter describing the average connection density in the network. The effect is qualitatively consistent with empirical evidence reported e.g. in [24]. The analysis is followed by expressing the dynamics of neuronal activity by a mean-field approximation. We show that the corresponding single-dimensional model does not explain the network spikes and bursts frequently observed in developing cultures. This limitation, however, can be resolved if neural activation is linked to an additional exogenous regulatory energy variable. Introduction of the latter variable needs an additional comment. It behaves as a sort of "energy" or resource. Its physical nature, nevertheless, may or may not be associated with a specific type of physical energy. A prototype of such energy, the notion of adaptation energy, was introduced by Selye in his analysis of physiological adaptation [25], [26] and was successfully employed in modelling of various complex phenomena [27], [28]. The extended model, which may also be viewed as a resource-limited model [29], [30], is shown to be able to reproduce the periodic spiking, irregular dynamics, and population bursts observed in live cultures. What is important is that the dynamic regimes exhibited by the model can be regulated by just a single parameter corresponding to network connectivity.
Next we provide the results of a large-scale simulation of an evolving network of agents whose activation probability depends on their current energy level. The network dynamics in this state shows population bursts satisfying the scaling avalanche conditions. This network state is self-sustainable and represents an energetic balance between global network-wide processes and the spontaneous activity of individual elements. The results are then compared with published empirical data. Details of the numerical experiments are provided in Materials and Methods.

Model
Geometric model. We start with the geometrical arrangement of the network elements. Consider a network of N neurons whose spatial coordinates are randomly and uniformly distributed in the unit square. Each individual neuron is described by two basic elements. The first is the region of reception of inputs, represented by a circle of a given radius R. The circle models the neuron's ability to sense input signals from other neurons and is referred to as the dendrite region (in biology, the dendrite is an input). The second element is an axon (in biology, the output), which in our model is simulated by a straight segment of length H (at the mature stage of network development, H > R) whose end point acts as a transmitter of the neuron's signal. If this point reaches the dendrite region of another neuron, a connection is established between these neurons [18]. There are three different ways that yield geometrical coupling, or connectivity, of the network elements. Case 1: cells without axons, i.e. H = 0. In this case N circles with radius R are randomly and uniformly distributed in the unit square. If a circle A overlaps with a circle B, and circle B is connected with a circle C, then A is connected with C. Thus, a path between two distant cells can be defined as a chain of overlapping circles joining these cells. The emergence of large groups of connected elements in this network can be analyzed within the framework of the standard circle percolation problem. Let n be the cell density, defined e.g. as the number of circle centers per unit area. According to [31,32], the emergence of large groups of interconnected cells, the percolation transition, in a set of randomly distributed circles can be characterised by the mean number of centers that fall within a circle of radius R:

B = π R^2 n. (1)

In particular, there exists a critical concentration B = B_c at which two arbitrary circles become connected with high probability.
Thus percolation occurs and a large cluster of connected circles appears. In contrast with typical thermal phase transitions, where a transition between two phases occurs at a critical temperature, the percolation transition relates to the distribution and topology of the clusters corresponding to values of B in a neighborhood of B_c. At low values of B only small clusters of overlapping circles exist. When the concentration B increases, the average size of the clusters increases too. At the critical concentration value, B = B_c, a large cluster appears; it includes groups of cells that are close to the opposing boundaries of the original square. This cluster is called the spanning cluster or percolating cluster. In the thermodynamic limit, i.e. in the limit of an infinite-size system, the spanning cluster is called the infinite cluster. For the scalar problem, B_c ≈ 1.1.
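The Case 1 setting is straightforward to probe numerically. The sketch below (an illustration, not the authors' code; the function names are ours) drops circle centers uniformly in a square, joins circles whose centers lie closer than 2R with a union-find structure, and tests for a cluster spanning the left and right edges; mean_B evaluates B = πR²n from (1).

```python
import math
import random

def mean_B(num_cells, radius, size=1.0):
    """B from Eq (1): expected number of centers inside a circle of radius R."""
    return (num_cells / size ** 2) * math.pi * radius ** 2

def percolates(num_cells, radius, seed=0, size=1.0):
    """Drop num_cells circle centers uniformly in a size x size square and test
    whether overlapping circles (center distance < 2*radius) form a cluster
    touching both the left and right edges, i.e. a spanning cluster."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(num_cells)]
    parent = list(range(num_cells))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(num_cells):
        for j in range(i + 1, num_cells):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < (2 * radius) ** 2:
                parent[find(i)] = find(j)      # merge the two clusters

    left = {find(i) for i, (x, _) in enumerate(pts) if x < radius}
    right = {find(i) for i, (x, _) in enumerate(pts) if x > size - radius}
    return bool(left & right)
```

With B well below B_c ≈ 1.1 a spanning cluster is practically never observed, while for B well above this value it appears with near certainty.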
Case 2. Cells have axons, H > 0, and the axons are allowed to transmit signals in both directions. Each neuron can be represented as an undirected pair of head- and tail-circles, both having radius R. When the head-circle or the tail-circle of a neuron overlaps with the head- or the tail-circle of another neuron, we consider these neurons connected. Although this setting differs from Case 1 in that we now operate with dipoles rather than just circles, the problem remains within the class of scalar percolation, albeit for dipoles of circles rather than single circles.
Case 3. Cells have axons, H > 0, and these axons can only transmit signals along a straight line, which determines the direction of connectivity for a given cell. The coupling direction from soma to synaptic terminal has an isotropic distribution, and hence each neuron can be represented as a directed pair of head- and tail-circles, both having radius R. Vectors linking the centers of the head- and tail-circles are allowed to have arbitrary direction; their lengths, H, however, are fixed. When the tail-circle of a neuron overlaps with the head-circle of another neuron, the pair is considered connected. In contrast with the two other ways of establishing neuronal connectivity considered above, this is the most realistic scenario. It is no longer within the scope of the simple scalar circle percolation framework but is a vector percolation problem.
The three cases are illustrated in Fig 1. Fig 2 shows the dependence of the percolation threshold parameter B_c on the ratio H/R. In accordance with the definition of B in (1), the cell concentration variable B can be related to the expected number of neurons that are connected to a randomly chosen query neuron in the system. The latter quantity is known as the average network coordination number z. As can be seen from Fig 2, the value of the coordination number z that corresponds to B_c is always bounded from above: B_c ≤ 1.4 for all configurations considered so far.
The above analysis reveals that networks with coordination numbers exceeding these critical values are likely to form a spanning cluster that is capable of connecting opposite edges of the system [33,34]. Thus, intuitively, one can argue that signals initiated by spontaneous activation of neurons in the cluster can spark waves of activity through the whole network. Such a conclusion is based on the restrictive assumption that neurons always elicit spikes in response to a spike at their input. Not only does this assumption not necessarily hold true, but the above geometrical model alone also does not explain the wealth of excitation propagation phenomena observed in cultures. To account for more plausible situations, as well as to possibly increase the explanatory power of the model, two additional variables are introduced: one is the probability p of neuronal activation in response to an incoming spike, and the other is an exogenous "resource" variable E determining whether a neuron has enough energy to elicit a spike. The consequences of adding these two variables are discussed in the next sections.
Mean-field dynamic model of neuronal excitation. We begin with a simple mean-field approximation of the dynamics of neural excitation in the system. Consider a connected network of neuronal cells. Let z be the expected coordination number, i.e. the expected number of neighbors of a randomly chosen query cell. Suppose that at a time instance t some neurons in the network are excited. Let q_t denote the number of these neurons relative to the total number of cells in the whole network. If the value of z is sufficiently large, then the number of excited neurons among all neighbors of a given neuron can be estimated as z q_t. Let p be the probability of neuronal activation in response to the activation of at least one neuron among its nearest neighbours, and suppose that all excitatory signals are independent. Then the probability that a given neuron is activated equals 1 - (1 - p)^{z q_t}, and hence the expected proportion of excited neurons at the time step t + 1 is:

q_{t+1} = 1 - (1 - p)^{z q_t}. (2)

The range of dynamics which model (2) is capable of reproducing is summarized in Proposition 1.
Proposition 1 Consider (2) and let p ∈ (0, 1), z > 0 be constants. Then the interval [0, 1] is forward-invariant, all forward orbits q_t are monotone functions of t, and the point map (2) has only fixed points as attractors. Furthermore,
1. if -z log(1 - p) ≤ 1 then the map has only one fixed point, q*_1 = 0, and it is an attractor;
2. if -z log(1 - p) > 1 then the map has two fixed points, with q*_1 = 0 being a repeller and the other one, q*_2 ∈ (0, 1), being a stable attractor.
Proof of Proposition 1 is provided in the Appendix. According to Proposition 1, the asymptotic mean-field dynamics of the network is steady activity at an equilibrium over the entire domain of the model's feasible parameters, p ∈ (0, 1), z > 0. Moreover, all transients are monotone trajectories. The emergence of the unique non-zero asymptotic activity is fully determined by the values of the connectivity parameter z and the probability of neural activation p. The critical values of these parameters, e.g. the critical connectivity z_1 at which the transition occurs, satisfy:

-z_1 log(1 - p) = 1. (3)

For z_1 >> 1, this relation is approximately reciprocal: p ≈ 1/z_1. Indeed, expressing p = 1 - exp(-1/z_1) from (3) and expanding exp(-1/z_1) as a power series in 1/z_1, one obtains p ≈ 1/z_1. The value of the stable equilibrium q*_2 can be estimated from the steady-state condition q*_2 = 1 - (1 - p)^{z q*_2}. Observe that the larger the value of the connectivity parameter z, the higher the level of the mean-field network activity.
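Proposition 1 and relation (3) are easy to check numerically. A minimal sketch of map (2) and its critical connectivity (the function names are ours):

```python
import math

def mean_field_step(q, p, z):
    """One step of map (2): q_{t+1} = 1 - (1 - p)**(z * q_t)."""
    return 1.0 - (1.0 - p) ** (z * q)

def iterate(q0, p, z, steps=500):
    """Iterate map (2) from q0; by Proposition 1 the orbit converges
    monotonically to a fixed point."""
    q = q0
    for _ in range(steps):
        q = mean_field_step(q, p, z)
    return q

def critical_z(p):
    """Critical connectivity z_1 from (3): -z_1 * log(1 - p) = 1."""
    return -1.0 / math.log(1.0 - p)
```

For p = 0.1 the critical connectivity is z_1 ≈ 9.49: below it all orbits decay to q*_1 = 0, above it they settle at the non-zero fixed point q*_2. For small p the reciprocal approximation holds, e.g. critical_z(0.01) ≈ 99.5 ≈ 1/p.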
Whilst model (2) is consistent with the very basic observation that increasing the network's overall connectivity may lead to the emergence of self-sustained activity in the network, the model's explanatory capability is limited. The model does not explain the widely reported richness of the dynamics in live neuronal cultures, including the emergence of spontaneous activity bursts and irregular, seemingly chaotic spikes.
This limitation is not surprising since (2) is a crude approximation of the network's dynamics. Model (2) does not account for a broad spectrum of biological mechanisms involved in spike generation and assumes that a neuron's ability to produce spikes depends exclusively on stimulation. A possible way to overcome this unrealistic assumption is to explicitly account for the missing mechanisms. To keep the model simple, we account for their joint effect by adding a single energy-like variable E_t to (2). The new variable determines the neuron's (and the network's) ability to produce a spike depending on the amount of resources, or "energy", available. Generic models of this type have been proposed and analyzed in [27,35] in the context of adaptation to stress and external environmental factors. These models have been shown to capture periodic and irregular behavior in multi-agent systems [28].
Here we extend the original phenomenological mean-field model (2) to the two-variable system (4), in which the activation probability is modulated by the energy variable and H is the Heaviside function. In (4), p is the maximal probability of neuronal activation, z > 0 is the coordination number, E_t is the exogenous phenomenological "energy resource" variable; r > 0 is the energy cost of neuronal activation, E̲ > 0 and w > 0 are parameters that determine the minimal activation probability and the energy activation threshold, Ē > E̲ is the energy recovery value, and ε ∈ (0, 1) is the energy relaxation parameter. The general shape of the function σ(·; E̲) in the energy-dependent synaptic efficacy component pσ(·; E̲) is shown in Fig 3. Mean-field model (4) of the network dynamics inherits the phenomenological transparency of (2). It does, however, account for generic constraints on spike generation through the new energy variable E_t and the energy-modulated synaptic efficacy pσ(·; E̲). Despite retaining this simplicity, the model produces remarkably rich dynamics. Its equilibria, however, can still be described by just a few parameters, as follows from Proposition 2 below.
Proposition 2 Consider (4) with p ∈ (0, 1).
1. If -z log(1 - pσ(Ē; E̲)) ≤ 1 then (4) has only one fixed point, (q*_1, E*_1), given by (5). This fixed point is an attractor when -z log(1 - pσ(Ē; E̲)) < 1.

2. If -z log(1 - pσ(Ē; E̲)) > 1 and Ē/r ≥ 1 then (4) has at most two fixed points: fixed point (5) and, if it exists, an additional fixed point (q*_2, E*_2) given by (6).
3. Otherwise, in addition to (5), the map may have a fixed point (q*_3, E*_3) given by (7).
The fixed point (q*_1, E*_1) is a repeller, and (q*_3, E*_3), if it exists, is a stable attractor.
Proof of Proposition 2 is provided in the Appendix. An illustration showing the relationships between the parameters of the model and the emergence of the three different equilibria described in Proposition 2 is provided in Fig 4. The equilibria are shown as white circles. Green lines show the curves, parameterized by q* > 0, to which all equilibria of the model with q* ≠ 0 must belong according to the first equation of (4) (see also the proof of Proposition 2 in the Appendix). Depending on the value of z, these curves move up and down and intersect the line segments shown as red solid lines in Fig 4; these intersections correspond to equilibria (6) and (7), respectively. The equilibria persist over intervals of z, and the greatest lower bounds of these intervals, the critical values z_2 and z_3, are specified in (9). We also note (see the proof of Proposition 2) that equilibria (6) always lie above or on the line E = rq (dashed orange line in Fig 4), whereas equilibria (7) lie below this line.
According to Propositions 1 and 2, models (2) and (4) share some similarity. For z sufficiently small, all orbits are attracted to a single equilibrium; at this equilibrium, the systems are silent. When z increases and eventually exceeds the first critical value (Eq (3) for (2) and z_2 for (4)), the silent equilibrium becomes a repeller and the systems start to exhibit non-zero activity. However, further increases of z trigger drastically different dynamics in the two models.
All orbits of model (2) with q_0 ≠ 0, as ensured by Proposition 1, converge monotonically to a single non-zero steady state regardless of how large the value of z becomes. The spectrum of orbits in model (4) is different. Our numerical experiments demonstrated that, in addition to equilibria, the model is capable of generating periodic orbits. Moreover, for a broad range of parameters the equilibria ((5) and (6)) do not attract the orbits, and trajectories appear to be chaotic with some apparent intermittency.
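A minimal numerical sketch in the spirit of (4) can be assembled from the ingredients listed above. Note the hedging: the logistic form of σ below and the linear relaxation-plus-depletion energy update are illustrative assumptions chosen to reproduce the qualitative mechanism (energy-gated activation), not the exact equations of (4).

```python
import math

def sigma(E, E_th, w):
    """Assumed stand-in for the gate sigma(., E_th) of Fig 3: a logistic
    function that is small below the energy threshold E_th and close to 1
    above it (w sets the steepness)."""
    return 1.0 / (1.0 + math.exp(-w * (E - E_th)))

def step(q, E, p=0.1, z=30.0, r=2.0, eps=0.05, E_rec=4.0, E_th=1.5, w=1.5):
    """One step of an energy-modulated mean-field model: activity as in (2)
    but with efficacy p*sigma(E), and energy relaxing toward the recovery
    value E_rec (rate eps) while being depleted at cost r per unit activity.
    The depletion/relaxation form is an assumption, not the paper's Eq (4)."""
    q_next = 1.0 - (1.0 - p * sigma(E, E_th, w)) ** (z * q)
    E_next = E + eps * (E_rec - E) - r * q_next
    return q_next, max(E_next, 0.0)
```

Iterating step from a small initial activity produces alternating episodes of high activity and energy depletion: activity drains E, the gate closes, activity decays while E recovers, and the cycle repeats, mirroring the burst mechanism discussed above.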
In order to gain additional insight into the model's dynamics, we numerically explored asymptotic regimes of (4) for varying values of z, E̲, and r. Other parameters were as follows: ε = 0.05, w = 1.5, Ē = 4, p = 0.1. Outcomes of these experiments are summarized in Figs 6-8 (see Materials and methods for details of the steps taken to produce these figures). In these experiments, the values of E̲ were chosen from a uniform equispaced grid of 21 points in the interval [1, 3]. This grid is shown as grey dashed horizontal lines in Figs 6-8. Parameter z varied adaptively (increments ranged from 0.1 in the intervals (0, 50] and (200, 300] to 5 in the interval (50, 200]). For these values of the model parameters, we assessed the type of the model's asymptotic dynamics and mapped these onto the relevant parametric regions, shown in different colours in Figs 6-8. According to Figs 6-8, the development and evolution of the dynamics of (4) follows a robust pattern. For a fixed value of E̲ ∈ [1, 3] and small z, trajectories of the model converge to a unique attractor (fixed point (5)). This attractor corresponds to the system state in which no elements/neurons are excited. When the value of z increases and exceeds z_2 (specified in (9) and shown as blue dashed lines in Figs 6-8), equilibrium (5) becomes a repeller and a second equilibrium emerges (fixed point (6)). Numerical evaluation of the eigenvalues of the Jacobian at this equilibrium showed that it is locally asymptotically stable. Further increments of z cause fixed point (6) to lose stability through a Neimark-Sacker bifurcation, and an attracting periodic orbit emerges. The boundary of this transition is depicted as green dashed lines in Figs 6-8. Emergence of rich activity patterns in neuronal networks as a direct result of network connectivity has been investigated in [36].
It has been found in [36] that the generation and spatiotemporal patterns of propagation were most variable in networks with intermediate clustering and least variable in uniform networks. Whilst our model (4) does not allow us to discriminate between clustered and uniform connectivity explicitly, Figs 6-8 capture the general effect of connectivity on the system's dynamics: rich and complex trajectories occur at "intermediate" values of the connectivity parameter z (see e.g. Fig 8). Figs 6-8 also show that the exact bounds of such "intermediate" values of z may depend on the values of other model parameters.
Notice that some of the complex trajectories shown in panel C of Fig 8 eventually converge to the stable equilibrium specified by (7). This is an empirical manifestation of slow relaxations and critical delays in model (4) [37], [38]. For z sufficiently large, these complex orbits disappear and reduce to periodic orbits (Figs 6 and 7) or mere equilibria (Fig 8).

Fig 8. Regions where fixed point (6) is locally asymptotically stable; green areas are the domains in which an attracting periodic orbit was detected; the red area corresponds to regions where complex chaotic-like dynamics were observed. The black line indicates a boundary beyond which co-existence of multiple attractors was observed consistently in all experiments. The inset linked to the gray area shows estimates of the transition boundaries (dashed blue (z_2 in (9)), green, and black (z_3 in (9)) lines) relative to the ones observed in experiments. The yellow region shows the domain in which all trajectories converged to (7). A, B, C, D: typical dynamics observed in the corresponding parametric regions. https://doi.org/10.1371/journal.pone.0218304.g008

Mean-field bursting dynamics, shown e.g. in panels D and E of Figs 6 and 7, resembles the population bursts observed in live evolving neuronal cultures. An important factor in the successful replication of this behavior was the energy variable E_t coupled with the energy-dependent activation probability pσ(E_t; E̲). The mean-field model, however, does not capture spatial effects and as such is only a rough approximation of activity propagation in neuronal cultures. In the next section we extend the proposed mean-field model (4) to a multi-agent network with randomized energy-dependent activation and numerically assess relevant parameters of its dynamics, including the distributions of sizes and durations of firing avalanches.
Multi-agent model of neuronal excitation. As a natural extension of (4), we consider a connected network of N neurons. The network's topology is determined by an adjacency matrix C whose elements are c_ij = 1 if there is a link from the i-th node to the j-th node, and c_ij = 0 otherwise. No links from a node to itself are permitted, but cycles are allowed. For simplicity, all links in the network were assigned equal weights with value 1. For the given adjacency matrix C, we determined the average number of inputs, ⟨N_in⟩, and the average number of outputs, ⟨N_out⟩. The dynamics of each i-th node in the network is described by two variables: the activity variable q_{i,t} and the energy variable E_{i,t}. The equations governing the evolution of these variables constitute model (10). The variables q_{i,t} take values in the set {0, 1}, and E_{i,t} lie in the interval [0, Ē]. The function σ(·; E̲) is as in (4).
The phenomenological motivation for the dynamics of individual nodes in model (10) is similar to that of the mean-field model (4). There are, however, several key differences. The evolution of the variables q_{i,t} and E_{i,t} in (10) explicitly accounts for local network topology and is directly driven by the activity of the node's neighboring cells (as opposed to the mean connectivity z and mean activity in (4)). The energy balance equation, the second equation in (10), accounts for the costs of transmitting active signals at the neuron's input (term r_2 ⟨N_in⟩^{-1} Σ_{j=1..N} c_{ji} q_{j,t}) and of generating activity signals at the neuron's output (term r_1 ⟨N_out⟩^{-1} Σ_{j=1..N} c_{ij}). If the node's energy level is insufficient to trigger a spike, E_{i,t} ≤ E^{fire}_{i,t}, then no spike is generated at t + 1. The latter property is difficult to fully capture at the level of the mean-field approximation, as low subthreshold values of the bulk energy do not necessarily imply absence of activity at the level of individual neurons (cf. Proposition 2, alternative 3, and Fig 4).
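A toy implementation of such an agent network (fully connected, as in the experiments below) might look as follows. The energy gate E_i/E_max, the spontaneous activation rate, and the exact cost terms are our illustrative assumptions standing in for the randomized threshold E^fire_{i,t} and the cost structure of (10).

```python
import random

def simulate(N=50, steps=200, p=0.8, r1=1.1, r2=5.5, eps=0.2, E_max=10.0, seed=0):
    """Sketch of a fully connected multi-agent network in the spirit of (10):
    binary activity q_i in {0, 1}, per-node energy E_i in [0, E_max].  A node
    fires with probability p scaled by its energy reserve whenever at least
    one neighbour fired; firing and input transmission cost energy, which
    then relaxes back toward E_max at rate eps."""
    rng = random.Random(seed)
    q = [1 if i == 0 else 0 for i in range(N)]   # seed one spontaneous spike
    E = [E_max] * N
    activity = []
    for _ in range(steps):
        nq = []
        for i in range(N):
            inputs = sum(q) - q[i]               # fully connected, no self-links
            gate = E[i] / E_max                  # energy-dependent efficacy
            fire = 1 if inputs > 0 and rng.random() < p * gate else 0
            # rare spontaneous activation keeps the network self-sustained
            if fire == 0 and rng.random() < 0.01:
                fire = 1
            nq.append(fire)
        for i in range(N):
            cost = r1 * nq[i] + r2 * (sum(nq) - nq[i]) / (N - 1)
            E[i] = min(max(E[i] + eps * (E_max - E[i]) - cost, 0.0), E_max)
        q = nq
        activity.append(sum(q))
    return activity
```

Running the sketch yields a population activity trace in which collective firing rapidly drains the nodes' energy, silencing the network until the energy recovers, the same feedback loop described for the mean-field model.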
In our numerical experiments, we focused largely on fully connected networks for which c_ij = 1 - δ_ij, where δ_ij is the Kronecker delta. Addition of a fraction of inhibitory connections did not result in qualitative changes in the network's dynamics. These simplifications are consistent with the approaches and observations reported in earlier works [23]. The remaining model parameters were fixed, while parameters r_1 and r_2 varied in the intervals [1, 1.5] and [4, 6], respectively.
We simulated forward orbits of model (10) for various initial conditions and parameter values, and determined the sizes and durations of avalanches of firing events. In our experiments, the avalanches were defined as events corresponding to intervals T_j = [t_j, t_{j+1}] of non-zero network firing activity. Fig 9 shows statistics of such events for model (10). As we can see from this figure, energy feedback has the capacity to inhibit system-size events in networks sharing the same graph topologies, and the parameters of this feedback may affect the exponents of the size and duration histograms. In particular, we observed that increasing the value of ε leads to an increase of system-size events, eventually pushing the system into the super-critical state. Fig 10 shows statistics and estimated exponents of avalanches for r_2 = 5.5, r_1 = 1.1. The estimated exponents are close to those reported for live neuronal cultures [7], [8], [39]. Observe that the size and duration curves have noticeable humps (cf. [39], Fig 2; [40], Figs 3-6), albeit with different and less prominent exponents than those reported in [39].
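For reference, avalanche sizes and durations can be extracted from a population activity trace, and a power-law exponent estimated, as in the following sketch (a generic recipe with our function names; the continuous-approximation MLE shown is standard but not necessarily the estimator used for Figs 9 and 10):

```python
import math

def avalanches(activity):
    """Split a per-step population activity series into avalanches: maximal
    runs of consecutive non-zero activity.  Returns (size, duration) pairs,
    where size is the total activity in the run and duration its length."""
    out, size, dur = [], 0, 0
    for a in activity:
        if a > 0:
            size += a
            dur += 1
        elif dur > 0:
            out.append((size, dur))
            size, dur = 0, 0
    if dur > 0:
        out.append((size, dur))
    return out

def mle_exponent(sizes, s_min=1):
    """Maximum-likelihood estimate of a power-law exponent (Clauset-style
    continuous approximation for discrete data):
    alpha = 1 + n / sum(ln(s / (s_min - 0.5)))."""
    data = [s for s in sizes if s >= s_min]
    return 1.0 + len(data) / sum(math.log(s / (s_min - 0.5)) for s in data)
```

Feeding the activity trace of the multi-agent simulation into avalanches and mle_exponent reproduces the kind of size and duration statistics summarized in the histograms.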
The activity of individual nodes, as a function of t, appeared to be synchronized with the peaks of their corresponding energy variables. A similar dependency was observed in the mean-field approximation too. These observations suggest that exogenous energy may be a relevant factor in understanding the dynamics of neural networks. Complementary to the connectivity-activity relations revealed in e.g. [17], [23], here we show that energy balance may modulate the dynamics of activity patterns in the network. Indeed, in our experiments the network was fully connected, and as such its connectivity was always above the network's percolation threshold. Yet, as Fig 9 illustrates, the occurrence and parameters of system-size events in such systems can be controlled by the energy balance within individual nodes. This gives rise to functional dynamical clustering in the model, as opposed to clustering induced merely by network connectivity.

Modelling dynamics observed in neuronal cultures
The mean-field and multi-agent models introduced and investigated in the previous sections show a range of behaviors controlled by just a few parameters. These parameters, in turn, may be related to physical quantities and variables such as the connectivity density parameter, thresholds, energy balance parameters (costs of spike generation and transmission), and energy recovery times. Figs 6-8 (see also the online repository [41] containing extended simulation results) show the different dynamical modes and behaviors corresponding to these parameters and their combinations. For a broad range of parameters (red areas in Figs 6-8), the dynamics of the activity variable q_t in (4) bears qualitative similarity to some of the patterns observed in evolving neuronal cultures at different development stages [42], [43]: tiny bursts, fixed-size bursts, variably sized bursts, and superbursts.
In addition to patterns in model trajectories, we looked at statistics of the Array-wide Spike Detection Rates (ASDR) and Burstiness Indices (BI) over various days in development (DIV) [42] and compared them with the values of ASDR and BI derived from the data generated by our model (see Methods for details). To relate empirical observations to our model, we assumed that the model coordination number z is representative of DIV for cultures. Results are shown in Fig 11. Similarly to how the median ASDRs change in real cultures with DIV (Fig 4, Panel A, [42]), the sample average of the model's median ASDRs first grows and then decreases with the growth of the model connectivity parameter z. Note that the initial growth phase is in agreement with the supercritical Neimark-Sacker bifurcation of the stable equilibrium of (4) (Proposition 2 and discussion afterwards). Moreover, the sample average of the model's BI as a function of z (Fig 11, bottom panel) shows a development trend that is very similar to the one observed in real neuronal cultures (Fig 4, Panel B, [42]).
Last but not least, the proposed model enables explicit modelling of the influence of external metabolic factors on the overall activity in the cultures. This is an advantage over networks of classical conductance-based models such as the Hodgkin-Huxley, Hindmarsh-Rose, and FitzHugh-Nagumo equations [44].
To demonstrate this possibility we modelled the network's activity during and after acute but short oxygen deprivation as reported in [45]. In [45], primary cultures of hippocampal cells were subjected to 10 min of acute oxygen deprivation on the 21st DIV. Reoxygenation after short-term hypoxia rapidly restores the energy deficit and neuronal ATP levels and increases the release of glutamate. Glutamate is a major excitatory neurotransmitter in the mammalian central nervous system. Glutamatergic neurons form the main excitatory system in the brain and play a pivotal role in many neurophysiological functions [46]. Excessive glutamate release is a homeostatic response to hypoxia-induced network silence [47]. It is directed at restoring network activity and results in an over-activation of ATP-dependent ion pumps and a bioenergetic-dependent network burst. However, excessive glutamate release over-activates its receptors and disrupts calcium homeostasis, which in turn leads to a cascade of intracellular events causing neuronal degeneration, referred to as excitotoxicity [48], [49].
To simulate, albeit qualitatively, changes in firing dynamics caused by hypoxia, as well as by the sequence of complex biological changes related to oxygen deprivation, we performed the following numerical experiments. The model, Eq (4), with r = 1.5, z = 170, ε = 0.05, w = 1.5, E̲ = 2, Ē = 4, p = 0.1, was iterated for 1500 steps. Then, for the next 1000 steps, the value of Ē was reduced to 0.4 and then restored back to the nominal level of 4 for t > 2500. This modelled the energy deficit caused by acute oxygen deprivation. At t = 2500, however, the value of p was increased to 0.15 to account for glutamate release, and then dropped to the level p = 0.07 in the interval [2700, 5000] to emulate glutamate-induced suppression [50]. For t > 5000 the value of p was made to decrease linearly to account for degenerative processes triggered by oxygen deprivation. Model behaviour as well as the evolution of p and Ē over time are shown in Fig 12. Overall activity levels shown by the model in this regime are in qualitative agreement with empirical observations reported in [45].
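The protocol above can be sketched numerically. The exact functional form of (4) is given earlier in the paper; the sketch below assumes a plausible reading of it: a sigmoidal energy gain σ(E) = 1/(1 + e^{−w(E − E_th)}) modulating the firing probability, and an energy update that pays the firing cost r·q_t only when affordable. The function `sigma`, the threshold `E_th = 2.0`, and the update rule are our assumptions, not the paper's exact equations; only the parameter schedule follows the text.

```python
import numpy as np

def sigma(E, w=1.5, E_th=2.0):
    # Assumed sigmoidal energy gain; E_th is a hypothetical threshold.
    return 1.0 / (1.0 + np.exp(-w * (E - E_th)))

def simulate(T=7000, r=1.5, z=170, eps=0.05, q0=0.5, E0=4.0):
    q, E = q0, E0
    qs, Es = [], []
    for t in range(T):
        # Parameter schedule from the text: hypoxia, reoxygenation,
        # glutamate release, suppression, then slow degeneration.
        E_bar = 0.4 if 1500 <= t < 2500 else 4.0
        if t < 2500:
            p = 0.10
        elif t < 2700:
            p = 0.15
        elif t < 5000:
            p = 0.07
        else:
            p = 0.07 * max(0.0, 1.0 - (t - 5000) / 2000.0)
        # Assumed activity update in the spirit of map (2):
        q_next = 1.0 - (1.0 - p * sigma(E)) ** (z * q)
        # Firing cost r*q_t is paid only when the energy allows it.
        cost = r * q if E >= r * q else 0.0
        E = (1.0 - eps) * E + eps * E_bar - cost
        q = q_next
        qs.append(q)
        Es.append(E)
    return np.array(qs), np.array(Es)
```

Under these assumptions the sketch reproduces the qualitative shape of the experiment: energy collapses during the hypoxic window, and activity dies out as p decays in the degeneration phase.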

Construction of Figs 6-8
To construct the figures, we ran 3 × 10^5 iterations of model (4) from 5 random initial conditions for each relevant set of parameters Ē, z, r, whilst keeping the values of w, p, E̲, and ε constant [1, 3]. The values of z varied adaptively. In the intervals (0, 50] and (200, 300] these values were taken from equispaced grids with a distance of 0.1 between the grid nodes; in the interval (50, 200] this distance was set to 5. For each set of model parameters and each initial condition, the last 2 × 10^4 points of each run were recorded. Points corresponding to orbits from different initial conditions were collated (color-coded), plotted in the (q_t, E_t) space and stored as .gif files at [41]. This resulted in circa 32000 images for r = 1 and r = 1.5, and circa 10000 images for r = 10. The resulting figures were visually inspected and classified into orbits converging to a) the single equilibrium (5), b) the single equilibrium (6), c) a single periodic orbit, d) complicated sets like the ones shown in Fig 5, and e) multiple attractors. These were color-coded and mapped onto the relevant parametric domains.

Construction of Fig 11
To model ASDR and BI data, we generated 60 × 86 orbits of (4) for different values of z, r, p, and Ē. The values of p, r, and Ē were chosen randomly (60 samples in total) in the intervals [0.1, 0.11], [1.5, 2] and [2, 2.5], respectively. This randomized choice of p, r, and Ē simulated fluctuations in neuronal excitability and in the energy balance dynamics across cultures; increments of the values of z modelled development of the network connectivity with the culture's age or DIV. Each triple of fixed but randomly chosen p, r, Ē was assigned 86 equally spaced values of z in the interval [10, 180]. Such model parametrization enabled us to simulate an ensemble of neuronal cultures capturing culture-to-culture variability as well as the cultures' development over time.
For each triple of p, r, Ē and every z, we generated an orbit of (4) comprising 2 × 10^5 iterates, and the orbits' segments composed of the last 3 × 10^4 elements were used in the calculations. For each segment, we calculated the model's analogue of the ASDR as well as the model's BI. The model's ASDR was defined as ASDR_t = (q_t − θ) H(q_t − θ), where H(·) is the Heaviside step function and the value of the threshold θ was set to 1/0.6745 of the median of q_t over the segment (cf. [51]). The model's BI was determined in accordance with the approach presented in [42]. The entire segment of the orbit was partitioned into adjacent and non-overlapping bins, each containing k consecutive measurements of ASDR_t. In our numerical experiments, we set k = 30. For each bin, we calculated the sum of ASDR_t within the bin. This was followed by determining the fraction of the total activity contained in the top m% of the bins. This fraction was recorded as the variable f_m. The model's BI was then defined as [42]: BI = (f_m − m/100)/(1 − m/100). Since the average width of ASDR_t spikes in the model was about 10 steps, the value of m was set to 50 (cf. [42]).
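The ASDR/BI computation just described can be sketched as follows. The function name is ours, and the BI normalization (f_m − m/100)/(1 − m/100) is an assumption following the burstiness-index convention of [42] where the text elides the formula.

```python
import numpy as np

def asdr_bi(q, k=30, m=50):
    """ASDR and burstiness index for a segment q_t of a model orbit.

    theta = median(q) / 0.6745; ASDR_t = (q_t - theta) * H(q_t - theta);
    BI = (f_m - m/100) / (1 - m/100), with f_m the fraction of total
    ASDR contained in the top m% of k-wide bins (normalization assumed,
    following the convention of Wagenaar et al. cited as [42]).
    """
    q = np.asarray(q, dtype=float)
    theta = np.median(q) / 0.6745
    asdr = np.where(q > theta, q - theta, 0.0)
    # Partition into adjacent non-overlapping bins of k measurements.
    n_bins = len(asdr) // k
    bin_sums = asdr[: n_bins * k].reshape(n_bins, k).sum(axis=1)
    # Fraction of total activity in the top m% of bins.
    top = max(1, int(round(n_bins * m / 100.0)))
    f_m = np.sort(bin_sums)[::-1][:top].sum() / bin_sums.sum()
    return asdr, (f_m - m / 100.0) / (1.0 - m / 100.0)
```

With this normalization, BI = 1 when all supra-threshold activity falls into a single bin (maximally bursty) and BI = 0 when activity is spread evenly across bins (tonic firing).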

Conclusion
In summary, we have proposed a simple network model explaining burst generation in living culture networks. A distinctive feature of our model is the presence of a dynamic exogenous energy variable and a neuronal activation probability that depends on this energy, as in general models of physiological adaptation [28]. We showed that introducing these modifications already enables us to explain the evolution of cultures from resting state to population bursts, at least in the mean-field approximation. In accordance with the model, the emergence of bursts and spikes is regulated by just a few parameters corresponding to network connectivity and the efficacy of synaptic transmission. We also note that our energy-based model is complementary to more traditional connectivity-focused approaches [23].
In this particular study, when comparing empirical data with model behavior, the number of days in development was related to network connectivity. We note, however, that in a broader biological context the latter can depend on various external factors, such as stress [52]. The proposed model hence might be able to predict qualitatively the effect of stress, and of adaptation to stress, on neuronal activity.
Large-scale multi-agent simulations demonstrated that these additional variables are capable of governing the network's dynamical state and, depending on parameters, keeping it at the edge of the percolation transition. The energy feedback acts as a mechanism for controlling and maintaining metabolic homeostasis; this enables communication between nodes across the whole network and at the same time prevents network overload caused by excessive propagation of activity. The energy feedback suppresses excitation in individual neurons by disabling, in effect, high-frequency local spike generation. It can therefore also be viewed as an implementation of frequency-dependent synaptic plasticity [17].
Despite qualitatively explaining certain phenomena observed in neuronal cultures, the model is simplified. It does not account for varying strengths of synaptic efficacy, plasticity mechanisms and their time scales, or the density of cells. It may also be interesting to investigate thoroughly the distributions of neural firing rates for different values of the model parameters and to relate these distributions to empirical evidence reported in the literature [53]. Accounting for these is the subject of our future work.

Ethics statement
No animals or human subjects were used or employed as a part of research presented in this work.

Proof of Proposition 1
Notice that a = (1 − p)^z ∈ (0, 1) for all p ∈ (0, 1), z > 0. Thus 1 − a^{q_t} ∈ [0, 1] for all q_t ∈ [0, 1], and forward invariance of [0, 1] follows. The right-hand side of (2) is continuous and strictly monotone with respect to q_t on (0, 1], with q_t = 0 being an equilibrium. Hence all forward orbits of this map, i.e. q_t, q_{t+1}, q_{t+2}, ..., are monotone, and map (2) has only fixed points as attractors. Furthermore, the right-hand side of (2) is strictly concave, which implies that the number of fixed points is at most two.
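The fixed-point dichotomy of map (2) can be checked numerically. A minimal sketch, assuming only the map q_{t+1} = 1 − (1 − p)^{z q_t} as given in the text: when the derivative at zero, −z log(1 − p), does not exceed 1 every orbit dies out, and when it exceeds 1 orbits converge to the positive fixed point q̄.

```python
import math

def iterate_map(p, z, q0=0.5, n=5000):
    """Iterate map (2): q_{t+1} = 1 - (1 - p)^(z * q_t)."""
    q = q0
    for _ in range(n):
        q = 1.0 - (1.0 - p) ** (z * q)
    return q

def rate_at_zero(p, z):
    """Right derivative of the map at q = 0, i.e. -z * log(1 - p)."""
    return -z * math.log(1.0 - p)
```

For example, p = 0.01, z = 50 gives a rate of about 0.50 and orbits converging to 0, while p = 0.01, z = 200 gives a rate of about 2.01 and a positive attracting fixed point q̄ ≈ 0.8 satisfying q̄ = 1 − (1 − p)^{z q̄}.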
If the value of p is such that the right derivative of f(q_t) = 1 − (1 − p)^{z q_t} at q_t = 0 is less than or equal to 1, then strict concavity of f(·) implies that f(q_t) < q_t for all q_t ∈ (0, 1]. Hence (2) has only one fixed point, q_t = 0. The corresponding condition is −z log(1 − p) ≤ 1. This fixed point is attracting: lim_{t→∞} q_t = 0. If −z log(1 − p) > 1 then the trivial equilibrium q_t = 0 becomes a repeller and a second fixed point q_t = q̄, q̄ ∈ (0, 1), appears. At this point, the line y = q and the curve y = f(q) = 1 − (1 − p)^{zq} intersect transversally. Indeed, if this were not the case then there would be a point q_0 > 0 such that 1 − (1 − p)^{z q_0} = 0 (see Fig 13, left panel). The latter, however, is impossible as (1 − p) ∈ (0, 1). Moreover, at the point of this intersection, the slope of the curve y = f(q) = 1 − (1 − p)^{zq} is always strictly smaller than one (see Fig 13, right panel). In order to see this, recall that at q = q̄ one has f(q̄) = q̄ and f(0) = 0; strict concavity of f(·) then gives f′(q̄) < f(q̄)/q̄ = 1.

Proof of Proposition 2

The form of the right-hand side of the first equation in (4) implies that q_t ∈ [0, 1] for all t regardless of the values of E_t. Let E_t ∈ [0, Ē] and E_t < r q_t. In this case, E_{t+1} = (1 − ε)E_t + εĒ ∈ [0, Ē]. If E_t ∈ [0, Ē] and E_t ≥ r q_t, then E_{t+1} = (1 − ε)E_t + εĒ − r q_t ∈ [0, Ē]. Thus the domain {(q, E) | q ∈ [0, 1], E ∈ [0, Ē]} is forward-invariant.

Statement 1. Observe that q*_1 = 0, E*_1 = Ē is always an equilibrium of (4). At this equilibrium, E = E*_1 ≥ r q*_1 = 0, and hence the Jacobian J(q*_1, E*_1) of the right-hand side of (4) at this equilibrium is lower triangular. It is clear that the eigenvalues of J(0, Ē) are λ_1 = −z log(1 − p σ(Ē, Ē)) and λ_2 = 1 − ε. Therefore, the fixed point (q*_1, E*_1) is an attractor when −z log(1 − p σ(Ē, Ē)) < 1 and is a repeller when −z log(1 − p σ(Ē, Ē)) > 1. This proves statement 1 of the proposition.

Statements 2, 3. Let (q*, E*) with q* ≠ 0 be another equilibrium of (4). All such equilibria of (4) must satisfy the corresponding fixed-point equations which, depending on the sign of E* − r q*, split into two cases. Let g(·) = σ(·, Ē). Note that the function g(·) : R → (0, 1) is continuous and strictly increasing for w > 0. Hence g^{−1}(·) : (0, 1) → R exists, and is continuous and strictly increasing too. Moreover, points (q*, E*) satisfying (11) should satisfy the conditions obtained by applying g^{−1} (and vice versa). Consider the functions f(·) and h(·), and let Ω be the domain of definition of the function h(·). If a solution of (12) exists with q* ∈ (0, 1), then the intersection (0, q̄) = Ω ∩ (0, 1) must be non-empty. The function f(·) is continuous on (0, 1), and its first derivative determines the corresponding entries of the Jacobian matrix at such an equilibrium, where [ · ] stands for the corresponding entry of the Jacobian matrix. The absolute values of its eigenvalues are clearly less than 1, and hence the fixed point is a stable attractor.