Complexity matching in neural networks

In the wide literature on brain and neural network dynamics the notion of criticality is being adopted by an increasing number of researchers, with no general agreement on its theoretical definition, but with consensus that criticality makes the brain very sensitive to external stimuli. We adopt the complexity matching principle that the maximal efficiency of communication between two complex networks is realized when both of them are at criticality. We use this principle to establish the value of the neuronal interaction strength at which criticality occurs, yielding perfect agreement with the adoption of temporal complexity as criticality indicator. The emergence of a scale-free distribution of avalanche sizes is proved to occur in a supercritical regime. We use an integrate-and-fire model where the randomness of each neuron is due only to the random choice of a new initial condition after firing. The new model shares with that proposed by Izhikevich the property of generating excessive periodicity, and with it the annihilation of temporal complexity at supercritical values of the interaction strength. We find that the concentration of inhibitory links can be used as a control parameter and that, for a sufficiently large concentration of inhibitory links, criticality is recovered. Finally, we show that the response of a neural network at criticality to a harmonic stimulus is very weak, in accordance with the complexity matching principle.


Introduction
The problem of the response of a complex system to external stimuli is of central interest for brain dynamics. The main purpose of the recent work of [1] was to prove that weak global harmonic perturbations can selectively enhance oscillations corresponding to the frequency of the applied stimulus. However, this goal requires a cautionary note based on the fact that the brain, as a generator of ideal 1/f noise [2], does not have a predominant frequency, which explains why the authors of [1] found that their complex system is not very sensitive to harmonic stimuli. Another interesting recent result was found by the authors of [3], showing that a rat B looking for food in a box may drive the motion of a rat A, in a different box, when the information gathered by a few electrodes implanted in the brain of rat B is transmitted to electrodes implanted in the brain of rat A. This experiment can be interpreted to some extent as a synchronization between two distinct complex systems, more specifically the brains of two rats. We interpret the results of [1] and [3] as a clear indication that the brain, which is not sensitive to harmonic stimuli, can be efficiently driven by stimuli produced by another brain.
These inspiring results confirm the recently made conjecture [4][5][6] that the transfer of information from one network to another requires that the two networks have matching complexities. To fully appreciate the importance of these properties we must first of all stress that they refer to a sequence of events, occurring at different times, with the time distance τ between two consecutive events being drawn from a distribution density ψ(τ) with the following time-asymptotic structure:

ψ(τ) ∝ 1/τ^μ.   (1)

Let us assign to the time distance between the ith event and the (i − 1)th event the value τ_i. We assume that the time distance between the (i + 1)th event and the ith event, τ_{i+1}, does not have any memory of τ_i. This condition is referred to as the renewal condition, and events fitting it are called renewal events. A sequence of renewal events generated by a system at criticality is denoted as temporal complexity. The authors of [7] studied the interval 1 < μ < 3 and found that the spectrum P(f) generated by these fluctuations is given by

P(f) ∝ 1/f^η,   (2)

with

η = 3 − μ.   (3)

The condition η < 1 can be generated also by events that are not renewal [7]. The condition η > 1, on the contrary, is incompatible with the action of non-renewal events [7]. The condition η = 1 is adopted by the healthy brain [2] and is thought to represent the ideal form of temporal complexity, insofar as η approaching 0 is a sign of uncorrelated fluctuations, whereas η approaching 2 seems to indicate a high level of predictability [4]. The renewal condition is assessed using the aging experiment method designed by Bianco et al [8] to prove that the intermittent fluorescence of new nano-materials, called blinking quantum dots, is generated by renewal events. The same aging experiment method was used to prove that a set of cooperating dipoles at criticality generates renewal fluctuations [9]. The aging experiment method was recently adopted to study cooperation in neural systems [10].
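The renewal condition can be illustrated numerically. The sketch below (our illustration, not taken from the paper) draws independent waiting times from one concrete realization of the power-law density of equation (1), namely ψ(τ) = (μ − 1) T^(μ−1)/(T + τ)^μ with an assumed short-time cutoff T; since each τ is drawn afresh with no memory of the previous one, the sequence is renewal by construction.

```python
import numpy as np

def renewal_waiting_times(mu, T=1.0, n=100_000, seed=0):
    """Draw i.i.d. waiting times from psi(tau) = (mu-1) T^(mu-1) / (T+tau)^mu.

    Inverse-CDF sampling: the survival probability is
    Psi(tau) = (T / (T + tau))^(mu - 1), so tau = T (u^(-1/(mu-1)) - 1)
    with u uniform on (0, 1). Each tau is independent of the previous one,
    which is exactly the renewal condition described in the text.
    """
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    return T * (u ** (-1.0 / (mu - 1.0)) - 1.0)

taus = renewal_waiting_times(mu=2.0)
# For mu = 2 and T = 1 the survival probability is 1/(1 + tau),
# so about half of the waiting times should exceed T.
```

With μ = 2 the mean waiting time diverges, a hallmark of the non-Poisson regime 1 < μ < 3 discussed in the text.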
It was used to prove that cooperation between neurons generates a phase transition to a process that, for large values of the cooperation parameter, becomes periodic after crossing an intermediate nonrenewal regime, and to confirm that at criticality the process is renewal.
The authors of [11] found that two complex systems that mimic the correlated dynamics of decision-making social systems synchronize most closely when both the driving and driven systems are poised at criticality, namely, in the condition signaling the transition from a state wherein the dynamics of the social units is uncorrelated to a state in which the units are highly correlated, thereby yielding the emergence of ideal 1/f noise. The more recent work of [12] sheds light on the origin of this form of cooperation-induced synchronization and proves that it rests on the key role of free-will states, i.e., criticality-induced renewal events. The similarity between the numerical experiments of [11,12] and the real neurophysiological experiment of [3] is of great interest and may be made even more attractive by showing, as we do in this paper, that a qualitatively similar result is obtained by using a dynamical model that mimics the behavior of interacting spiking neurons rather than the cooperative interaction of social units.
The criticality-induced synchronization of [11,12] conjures up a theoretical perspective based on the special role of complexity, understood as a condition intermediate between order and randomness, as the optimal condition for the brain. As a matter of fact, an increasing number of investigators are adopting the hypothesis of criticality to explain brain function: the brain is a complex system operating at criticality [13]. The increasing popularity of this perspective is founded on its capability to explain the spatial and temporal correlation recently revealed by the use of functional magnetic resonance imaging [14]. We refer the readers to the excellent book edited by Plenz and Niebur [15]. Criticality is the result of a balance between excitatory and inhibitory links, with inhibition playing the crucial role of maintaining the brain in a resilient condition, namely, at criticality. The authors of [16] discussed the role of inhibitory links that may promote self-sustained activity and criticality together.
We are convinced, however, that there is not yet a clear understanding of the kind of criticality behind brain dynamics and, more generally, behind the dynamics of neural networks. The most popular indicator of criticality in neural systems is the distribution of the number of neurons that fire together. The critical state is signaled by neuron-firing avalanches with a size distribution density p(s) ∝ 1/s^1.5 [17]. Another indicator of criticality is given by long-range correlations [18]. As mentioned earlier, a further important indicator is temporal complexity. In the specific case of neural dynamics this indicator rests on evaluating the time distance between two consecutive firings in the neural system. As discussed in the earlier work of [10,19,20], at criticality the time distance between two consecutive events fits the prescription of equation (1), with the survival probability Ψ(t) corresponding to the waiting-time distribution density ψ(t) of equation (1) taking the specific shape of a Mittag-Leffler (ML) function (see section 4 for more information about this form of survival probability). The events have been proven to be renewal by using the aging experiment method [10,20]. In this article we take the renewal condition for granted and we assume that the emergence of the ML survival probability is a sign of temporal complexity.
In this paper we also use the complexity matching principle as criticality indicator. As done in the real experiment of [3], we use a neural network B to drive the identical network A. A few neurons of A, corresponding to the electrodes implanted in the brain of rat A, are forced to move as a few neurons of B, corresponding to the electrodes implanted in the brain of rat B. Then we change the control parameter k of both networks and we establish the condition at which the cross-correlation between the two networks becomes maximal. We make the assumption that the value of k determined by means of this numerical experiment corresponds to the critical state of the system. This definition of criticality leads us to the same value of k as that established by using temporal complexity as criticality indicator. Temporal complexity and long-range correlation have been shown to lead to the same value of the control parameter k in the earlier work of [18]. What about the distribution density of the avalanche sizes? We find that the adoption of avalanche size as indicator of criticality leads to a value that corresponds to the supercritical regime, a result confirming the observation of the earlier work of [20]. In this supercritical condition the renewal character of neuron firings is lost, and a bridge between temporal complexity and periodicity is established [10]. This result casts doubt on the equivalence between temporal complexity and self-organized criticality (SOC). We shall see, however, that the results of our analysis account very well for the experimental observation of [1], the emergence of 1/f noise, the transfer of information from one brain to another [3] and the beneficial role of inhibitory links [16].
We think that the results of this article may also lead to important medical applications because, as we shall see, there exists a close connection between the results of the numerical simulations of this article and the phenomenon of complexity matching [4][5][6], which may be used to design efficient non-invasive stimuli.
The outline of the paper is as follows. In section 2 we compare the model studied in this article to the integrate-and-fire model that has been adopted by our group in earlier work [20]. The main purpose is to recover qualitatively similar results, in spite of the fact that here randomness is determined by the choice of the initial state of a neuron after firing rather than by the stochastic force affecting the motion of each neuron. Section 3 shows that the behavior of a single neuron is not significantly affected by the interaction with the others, whereas the distance between consecutive firings of the whole network is a very sensitive indicator of the cooperation between the neurons. Section 4 affords an intuitive explanation of why cooperation is expected to make the exponential distribution of time distances between two consecutive global firings turn into a stretched exponential form, and thence into a waiting-time distribution with an extended inverse power law tail. In section 5 we show that the correlation between two identical networks becomes maximal at criticality. The adoption of avalanche size as the indicator of criticality, on the contrary, would not lead to maximal correlation between driven and driving network, an important property that is illustrated in section 6. Section 7 shows that increasing the number of inhibitory links has the effect of shifting criticality to higher and higher values of the interaction strength, while leaving unchanged the signatures of criticality. In section 8 we show that the response of a complex network to a harmonic stimulus is greatly reduced compared to the complexity matching condition, although the system at criticality also best perceives the harmonic stimulus. We devote section 9 to concluding remarks.

Earlier work
The model adopted in this article belongs to the class of leaky integrate-and-fire models [21][22][23] described by

dx_i/dt = S − γ x_i + σξ_i(t) + k Σ_j L_{i,j} F_j(t),   (4)

where k is the control parameter and L_{i,j} describes the coupling between neurons. Note that i = 1, …, N, with N being the total number of neurons. The function F_j(t) is a Dirac delta function centered on the firing times of neuron j. Each neuron moves along the x-axis starting at the rest state x = 0 and fires when it reaches the threshold x = 1. When a neuron fires it forces all the neurons linked to it to make a step ahead or backward by the quantity k according to whether L_ij = 1 (excitatory) or L_ij = −1 (inhibitory). The parameter k plays the all-important role of control parameter and is expected to generate criticality when the special value k_C, to be determined using a suitable criticality indicator, is adopted. After firing, each neuron jumps back to the rest state x = 0. The stochastic force f(t) = σξ(t), adopted in the earlier work of [19,20], is a random process of intensity ⟨f²⟩ = σ², with ξ = ±1 determined by a coin-tossing process. In this earlier work complexity emerges from the balance between disorder, generated by the stochastic force, and order, produced by k > 0, with all the links belonging to the excitatory class. In fact, as discussed in the pioneering work of Mirollo and Strogatz [24], with a vanishing stochastic force the model generates a periodic process with all the neurons firing at the same time, with the time period being

T_MS = (1/γ) ln(S/(S − γ)).   (5)

Note that T_MS is the time it takes a neuron to reach the threshold moving from the initial condition x(0) = 0. Thus, it is evident that when k > 0 the neurons will be forced toward the synchronization condition, where the whole set of neurons would have a periodic motion with period T_MS. When k = 0, as an effect of the randomness, the time distance between two consecutive firings is given by the Poisson distribution density

ψ(τ) = (1/T_MS) exp(−τ/T_MS).   (6)

In this article we study another source of containment of order, the action of inhibitory links. We shall see that the balance between order and random-induced Poisson statistics requires higher and higher values of the interaction strength parameter k as the number of inhibitory links is increased.
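As a check on the period T_MS discussed above, the following sketch (with illustrative values of S and γ, not those of the paper) compares the closed-form expression T_MS = (1/γ) ln(S/(S − γ)) with a direct Euler integration of dx/dt = S − γx from x = 0 to the threshold x = 1.

```python
import math

def t_ms(S, gamma):
    """Mirollo-Strogatz period: time for dx/dt = S - gamma*x to go from 0 to 1."""
    return math.log(S / (S - gamma)) / gamma

def simulate_to_threshold(S, gamma, dt=1e-4):
    """Euler-integrate a single neuron from x = 0 until it reaches x = 1."""
    x, t = 0.0, 0.0
    while x < 1.0:
        x += (S - gamma * x) * dt
        t += dt
    return t

S, gamma = 2.0, 1.0          # illustrative values; note S > gamma is required
analytic = t_ms(S, gamma)    # ln(2) for this choice
numeric = simulate_to_threshold(S, gamma)
```

The agreement between the two values confirms the closed form; with S only slightly larger than γ the period becomes very long, which is the regime relevant to slow neural firing.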

Random choice of the initial condition
As mentioned in section 1, we plan to prove that an efficient stimulus of a complex network must match its complexity, thereby explaining the modest sensitivity of a complex network to harmonic stimuli [1]. To make our treatment as close as possible to the spirit of [1], which rests on Izhikevichʼs model [25], we made some numerical calculations based on the prescriptions of that model. The firing behavior illustrated in [25] shows that this model yields a periodic or periodic-like behavior. According to our concept of complexity, a periodic behavior is not complex. A totally disordered sequence of firings, i.e., a Poisson sequence, where the waiting-time distribution density ψ(t) is exponential like the function of equation (6), is not complex either. We think that the waiting-time distribution density corresponding to the ML survival probability of section 4 is a proper representation of temporal complexity. The results of these computer calculations, not shown here, led us to recover the periodic or quasi-periodic behavior of [25], thereby forcing us to conclude that a large number of inhibitory links has to be used to bring the system into the state of temporal complexity that we conjecture to properly describe the dynamics of the brain. On the other hand, the model of [25] is not suitable for the theoretical discussion of this article because, being much more realistic than the model adopted in this paper, it makes it difficult to appreciate the role of disorder generated by the choice of a random initial condition rather than by the fluctuations ξ_i(t) on the right-hand side of equation (4). For this reason we decided to adopt a simplified version of Izhikevichʼs model that is very similar to the model used in the earlier work of our group, equation (4), while replacing the randomness generated by the fluctuations ξ_i(t) with the disorder created by the random choice of initial condition after firing.
The equation of motion of a single neuron becomes remarkably simple,

dx/dt = S − γ x,   (7)

for the motion from a given initial condition to x = 1, with a random choice of the new initial condition, the distribution density of which is p(x). Figure 1 shows the resulting time evolution of a single non-interacting neuron.
The solution of equation (7) with initial condition x(0) = x is

x(t) = S/γ − (S/γ − x) e^{−γt}.   (8)

The time it takes a single free neuron to reach the threshold moving from the initial position x is therefore

t(x) = (1/γ) ln((S − γx)/(S − γ)),   (9)

which establishes the following relation between the initial position and the time t needed to reach the threshold:

x(t) = (S − (S − γ) e^{γt})/γ.   (10)

Note that to ensure that x(t) reaches the threshold x = 1 we must set the condition

S > γ.   (11)

The distribution ψ(t) of these times is obtained using

ψ(t) = p(x(t)) |dx/dt|.   (12)

In the case p(x) = 1, plugging equation (10) into equation (12) we obtain, after some algebra,

ψ(t) = (S − γ) e^{γt},   (13)

which is properly normalized in the time interval

0 ≤ t ≤ T_max = (1/γ) ln(S/(S − γ)).   (14)

We notice that T_max coincides with T_MS, which in the case of the stochastic integrate-and-fire model establishes the time period of a system of cooperating neurons in the supercritical regime corresponding to perfect synchronization.
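The mapping from random initial conditions to firing times can be checked numerically. The sketch below (with illustrative parameters, not those of the paper) draws initial positions uniformly on [0, 1), converts them to firing times through t(x) = (1/γ) ln((S − γx)/(S − γ)), and verifies that all firing times fall inside [0, T_MS].

```python
import numpy as np

def firing_time(x, S, gamma):
    """Time for a free neuron started at position x to reach the threshold x = 1."""
    return np.log((S - gamma * x) / (S - gamma)) / gamma

rng = np.random.default_rng(1)
S, gamma = 2.0, 1.0                       # illustrative values with S > gamma
x0 = rng.random(100_000)                  # uniform initial conditions, p(x) = 1
t = firing_time(x0, S, gamma)
t_max = np.log(S / (S - gamma)) / gamma   # equals T_MS for this free neuron
# The density of t grows like e^(gamma*t) on [0, t_max]; for S = 2, gamma = 1
# the mean firing time works out analytically to 2 ln 2 - 1.
```

The empirical mean of t matches the analytic value 2 ln 2 − 1 for this parameter choice, confirming the change-of-variables construction of ψ(t).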
It is simple to prove that if p(x) = 1 the mean waiting time ⟨τ⟩ for a single neuron, with no interaction with the others, is

⟨τ⟩ = ∫₀^{T_max} t ψ(t) dt,

namely T_MS/2 in the limit of very small values of the damping coefficient γ. The choice of a uniform distribution of initial values of x has the effect of creating a significant departure from the earlier model, with an enhancement of periodicity. This is the reason why the adoption of the Izhikevich model generated periodic effects that were too strong, thereby requiring the adoption of a large number of inhibitory links. To reduce this effect we assume

p(x) = β e^{−βx}/(1 − e^{−β}).   (18)

This is a normalized distribution of initial conditions that, with increasing β, tends towards the condition where all the neurons select x = 0 after firing. The waiting-time distribution corresponding to equation (18) reads

ψ(t) = [β e^{−βx(t)}/(1 − e^{−β})] (S − γ) e^{γt},   (19)

with x(t) given by equation (10).

Individual and global survival probability
To assess the emergence of temporal complexity we proceed as follows. We record the times of occurrence of neuron firings. At each time step there may be no firing or the firing of one or more (up to N) neurons. The firing of one or more neurons is an event that is recorded. Then we evaluate the time distance between two consecutive events, the corresponding waiting-time distribution ψ(t) and survival probability Ψ(t).
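The event-extraction procedure just described can be sketched as follows. This is our illustration on a synthetic Poisson raster, and the helper name survival_probability is ours, not the paper's: events are the time steps at which at least one neuron fired, and the empirical survival probability is the fraction of waiting times exceeding t.

```python
import numpy as np

def survival_probability(event_steps):
    """Empirical survival probability Psi(t) of the waiting times between
    consecutive global events (time steps at which at least one neuron fired)."""
    waits = np.diff(np.sort(np.asarray(event_steps)))
    waits = waits[waits > 0]
    ts = np.sort(waits)
    # Psi(t) = fraction of waiting times strictly larger than t
    psi = 1.0 - np.arange(1, ts.size + 1) / ts.size
    return ts, psi

# Toy raster: independent firing events at rate 0.1 per time step,
# standing in for the k = 0 (non-interacting) condition of the text.
rng = np.random.default_rng(2)
steps = np.nonzero(rng.random(200_000) < 0.1)[0]
ts, psi = survival_probability(steps)
```

For this memoryless toy input the resulting Ψ(t) is exponential (geometric, in discrete time), which is the Poisson baseline against which temporal complexity is measured.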
There is a big difference between the survival probability of a single neuron and the survival probability of the whole neural network. As mentioned earlier, we fix the parameters of the leaky integrate-and-fire model of this paper in such a way as to make the mean distance between two consecutive firings of a non-interacting network very close to T_MS. As a consequence, a typical survival probability of a single neuron looks like that illustrated in figure 3. When the interaction between the neurons is switched on, the firing of a single neuron is not significantly affected for interaction strengths k comparable to the critical value.
The survival probability of the whole network, on the contrary, shows a significant dependence on k, as shown in figure 4. The reason for this dependence is that in the absence of interaction, k = 0, the process is totally dominated by randomness. Regardless of whether the network is initially assigned a random distribution of firing times or not, as a consequence of the random nature of the single neurons, the final steady-state condition is characterized by a Poisson distribution density

ψ(τ) = r exp(−rτ)   (20)

and by the corresponding survival probability

Ψ(τ) = exp(−rτ),   (21)

with

r = N/⟨τ⟩,   (22)

where ⟨τ⟩ is the mean time between firings of a given neuron. Note that throughout this paper we use γ = 0.0005, S = 0.000 500 05 and threshold x = 1. The motivation for equation (22) is evident: this is the rate of event occurrence with N neurons. The Poisson nature of equations (20) and (21) is due to the total lack of interaction. When k is large enough so as to make all the neurons fire at the same time, the global survival probability is expected to become identical to the survival probability of a single neuron. This expectation is fulfilled by the numerical results of this article, as the reader can easily assess by comparing the global survival probability of figure 4 with k = 0.028 to the single-neuron survival probability of figure 3. As pointed out in section 1, temporal complexity is a special condition in between the order of all the neurons firing together and the disorder of a Poisson-like time distance between two consecutive firings in the absence of interaction. We adopt the procedure of the earlier work of [19,20] to establish the nature of the phase transition from the regime of uncorrelated to cooperation-induced correlated dynamics.

Temporal complexity as an emergent property at criticality
The correct interpretation of the criticality found in the brain, and of the criticality of complex systems in general, depends on establishing the proper signatures of criticality. On the basis of the treatment of [26], most of the advocates of the concept of criticality in the brain adopt as a signature of criticality the emergence of avalanches with inverse power law distributions, with inverse power index ν = 3/2 for the size and inverse power index ν = 2 for the time duration.
We adopt a different criticality indicator. This indicator is temporal, thereby explaining why it is called a temporal complexity indicator. More specifically, we follow the authors of [19,20], who have studied the distribution density of the time distances between two consecutive firings as a possible indicator of criticality, characterized by the transition from the exponential to the ML regime. The ML function [27] is a generalization of the ordinary exponential function that establishes a bridge between the stretched exponential and the inverse power law. The hypothesis that this may be a proper indicator of criticality is supported by the following arguments. In the absence of cooperation a set of neurons generates the Poisson sequence of firings of equation (20), with only one neuron firing at a time. This is so because the probability that two neurons fire at the same time, in the absence of cooperation, is negligibly small. As a result of cooperation the firing of a neuron triggers the firing of other neurons, and when the cooperation strength is large enough the firing sequence becomes periodic, with all the neurons firing at the same time [24]. Temporal complexity is a condition intermediate between the Poisson and periodic conditions. When the cooperation strength is not yet large enough to establish perfect synchronization, we observe a small deviation from the exponential behavior. The neurons that will contribute to the same burst of firing neurons diminish the distance between their firings, whereas the neurons that will contribute to different bursts tend to increase the distance between their respective firings. The stretched exponential function

Ψ(t) = exp(−(λt)^α),

with α < 1, is a very natural way to generate this condition.
In fact, if λ ≈ g_0, where g_0 is the rate of the unperturbed Poisson process Ψ(t) = exp(−g_0 t), then in the short-time region Ψ(t) < exp(−g_0 t) and in the long-time region Ψ(t) > exp(−g_0 t), thereby simulating the emergence of distinct firing bursts, with time distances between two consecutive firings in the same burst shorter than in the exponential condition, and larger for consecutive neuron firings that contribute to distinct firing bursts.
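A quick numerical check of this short-time/long-time ordering, with illustrative values of λ and α:

```python
import math

def stretched_exp(t, lam, alpha):
    """Psi(t) = exp(-(lam*t)^alpha): decays faster than the exponential at
    short times and slower (heavier tail) at long times when alpha < 1."""
    return math.exp(-((lam * t) ** alpha))

lam, alpha = 1.0, 0.7
exp_ = lambda t: math.exp(-lam * t)

short = stretched_exp(0.1, lam, alpha) < exp_(0.1)    # faster early decay
long_ = stretched_exp(10.0, lam, alpha) > exp_(10.0)  # heavier tail later
# The two curves cross at t = 1/lam, where (lam*t)^alpha = lam*t.
```

The crossover at t = 1/λ separates the intra-burst regime (shorter waiting times than Poisson) from the inter-burst regime (longer waiting times), as described above.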

Fitting procedure
The ML function at short times is a stretched exponential. As a consequence, the earlier heuristic arguments can be interpreted as a justification of the criticality-induced emergence of the ML function. As is well known, the Laplace transform of the ML survival probability Ψ(t) is given by

Ψ̂(u) = u^{α−1}/(u^α + λ^α).   (28)

As done in the earlier work [19,20], we fit the Laplace transform of the numerical results to determine λ and α. Note that as a consequence of using an interaction strength k of moderate intensity, the emergence of complexity occurs in the presence of significant noise that has the effect of truncating the long-time tails of the ML survival probability. The work of [28] shows that a convenient way of simulating the effect of noise is to replace the ML function Ψ(t) with Ψ(t) exp(−εt), thereby replacing equation (28) with

Ψ̂(u) = (u + ε)^{α−1}/((u + ε)^α + λ^α).   (29)

We follow the procedure adopted in [20]. The result of the fitting procedure based on equation (28) is equivalent to the adoption of the algorithm of [29]. To take into account that the cooperation strength generating temporal complexity is not yet strong enough to make the influence of randomness vanish, we adopt a more refined fitting procedure, equivalent to supplementing the fitting parameters λ and α with the additional parameter ε of equation (29). This is accomplished by running a modified version of the algorithm of [30]. The latter procedure, of course, generates a better agreement between analytical fitting functions and numerical results. The results of these procedures are illustrated by figures 5 and 6.
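A minimal version of the Laplace-domain fit can be sketched as follows. This is an illustration, not the algorithm of [29] or [30]: we numerically Laplace-transform a synthetic survival probability and grid-search the parameters (α, λ) of the ML transform u^(α−1)/(u^α + λ^α). As a sanity check, a pure exponential input should be recovered with α ≈ 1.

```python
import numpy as np

def laplace(ts, psi, u):
    """Numerical Laplace transform of Psi(t) sampled on a uniform time grid."""
    dt = ts[1] - ts[0]
    return np.array([np.sum(psi * np.exp(-uu * ts)) * dt for uu in u])

def fit_ml(ts, psi, u):
    """Grid-search fit of the ML Laplace transform u^(a-1)/(u^a + lam^a)."""
    target = laplace(ts, psi, u)
    best = (None, None, np.inf)
    for a in np.linspace(0.5, 1.0, 51):
        for lam in np.linspace(0.5, 2.0, 76):
            model = u ** (a - 1.0) / (u ** a + lam ** a)
            err = np.sum((model - target) ** 2)
            if err < best[2]:
                best = (a, lam, err)
    return best[0], best[1]

# Sanity check: Psi(t) = exp(-t) should be fit by alpha ≈ 1, lambda ≈ 1,
# since for alpha = 1 the ML transform reduces to 1/(u + lambda).
ts = np.linspace(0.0, 50.0, 50_001)
psi = np.exp(-ts)
u = np.linspace(0.05, 2.0, 40)
alpha_fit, lam_fit = fit_ml(ts, psi, u)
```

In practice a gradient-based optimizer and the truncated form with the extra parameter ε would replace the coarse grid, but the structure of the fit is the same.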

Detection of the critical interaction strength with no inhibitory links
In this section we illustrate some aspects of temporal complexity that help us to establish the occurrence of criticality at k_C = 3.3 × 10^−7, which will be proved later to correspond to the condition where the neural network becomes maximally sensitive to external stimuli. We notice that the second derivative of α(k), derived from the fitting procedure corresponding to equation (29), reveals a marked maximum at k_C = 3.3 × 10^−7. This is shown in figure 7. We interpret this value as the manifestation of criticality. It is interesting to notice that the value of α at criticality is close to α = 0.95, as shown in figure 8. On the other hand, the ML function with α = 0.95 has a tail 1/t^0.95 and thus the inverse power law behavior of equation (1) with μ = 1.95. Using the theoretical prescription of equation (3) we reach the conclusion that this condition generates the spectral noise of equation (2) with η = 1.05, which is remarkably close to the ideal 1/f noise that is judged to be a distinctive property of the brain [2]. In figure 9 we illustrate the parameter λ as a function of k and we find again that at about k = 3.5 × 10^−7 a change of regime occurs. Figure 10, for the readerʼs convenience, shows α(k) and λ(k) together. To shed further light on the change of regime occurring at k = k_C ≈ 3.3 × 10^−7 we evaluated the firing density as a function of k. The results are illustrated in figure 11. We see that in the subcritical regime the firing density is a linear function of k. In the supercritical regime the increase of firing density seems to saturate for very large values of k, after an initial increase faster than in the subcritical regime. At the moment of writing this paper we do not yet have a theoretical understanding of the change of regime occurring at k_C. However, we notice that at the onset of this change of regime the temporal complexity, in the form of the ML survival probability, distinctly appears.
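The criterion of reading k_C off the second derivative of α(k) can be sketched numerically. The curve below is synthetic (a kink placed by hand at 3.3 × 10^−7), and we locate the sharpest change of α(k) through the peak of |α″(k)|, a slight variant of the maximum-of-the-second-derivative criterion used in the text.

```python
import numpy as np

def locate_kc(k, alpha):
    """Locate the coupling at which alpha(k) changes behavior most sharply,
    via the peak in the magnitude of the discrete second derivative."""
    d2 = np.gradient(np.gradient(alpha, k), k)
    return k[np.argmax(np.abs(d2))]

# Synthetic alpha(k): flat at 1.0, then decreasing linearly past a kink.
k = np.linspace(1e-7, 6e-7, 501)
kc_true = 3.3e-7
alpha = np.where(k < kc_true, 1.0, 1.0 - 2e5 * (k - kc_true))
kc = locate_kc(k, alpha)
```

On real, noisy α(k) data the curve would first be smoothed; the second-derivative peak then marks the onset of the change of regime.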
In fact, we are tempted to interpret the results of figure 8 as a second-order phase transition, made smooth by the action of a finite rather than infinite number of units, with α moving from the exponential condition α = 1 to the distinctly smaller value of α = 0.95.

Criticality-induced correlation
Let us now address the important issue of information transport from one network to another. We consider two identical networks that play the roles of the rats B and A of the recent experiment of [3]. The two networks are identical in the sense that they have the same k and the same number of nodes. The coupling of A to B is realized as follows. We select randomly a subset ΔA of the neurons of the network A and a subset ΔB of neurons of B. Each neuron of the subset ΔA is coupled to a neuron of the subset ΔB and is forced to adopt its potential x.
Experiments of the same kind were done in earlier work. In [31] the problem was addressed by studying the cross-correlation between the mean field of the driving network and the mean field of the driven network. The authors of the earlier work of [20] used both the mutual information and cross-correlation methods. However, mutual information and cross-correlation were proved to provide qualitatively similar results [12,20].
We study the correlation of the network A with B by means of the correlation function

C = Σ_i (X_i − X̄)(Y_i − Ȳ) / [Σ_i (X_i − X̄)² Σ_i (Y_i − Ȳ)²]^{1/2}.

The calculation is done at a given time t over a time interval of the order of 10^5 or 10^6 time steps (of size 1), and its results do not change with increasing t. The symbols X_i and Y_i denote the membrane potentials of the neurons of the networks A and B, respectively. The sum runs from i = 1 to i = N, where N is the total number of neurons of each network, and the result is independent of the order adopted to identify the neurons. The symbols X̄ and Ȳ denote the mean membrane potentials of the networks A and B, respectively. The result of this numerical experiment is illustrated by figure 12. We see that the maximal value of the cross-correlation C occurs very close to the critical value k_C. It is worth remarking that at k = 0 (not shown) the cross-correlation C vanishes, as it should, and is expected to vanish also for values of k much larger than k_C = 3.3 × 10^−7. We do not yet have theoretical arguments to explain why C has a small negative minimum at k < k_C and one at k > k_C. We limit ourselves to pointing out that the numerical experiment of figure 12 was done with the perturbing network in the steady condition and the perturbed network in the transient regime towards the steady state. Less accurate results, not shown here, with both systems in the steady state yield C > 0 throughout the whole range.
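The cross-correlation C can be computed as sketched below. The membrane potentials here are synthetic, and the partial driving of A by B is a toy stand-in for the coupled-network experiment of the text.

```python
import numpy as np

def cross_correlation(X, Y):
    """Pearson cross-correlation:
    C = sum_i (X_i - Xbar)(Y_i - Ybar)
        / sqrt( sum_i (X_i - Xbar)^2 * sum_i (Y_i - Ybar)^2 )."""
    dx = X - X.mean()
    dy = Y - Y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2))

rng = np.random.default_rng(3)
x_b = rng.random(1000)                     # potentials of the driving network B
x_a = 0.5 * x_b + 0.5 * rng.random(1000)   # network A partially driven by B
c = cross_correlation(x_a, x_b)            # identical signals would give C = 1
```

C = 1 for identical networks, C = 0 for independent ones; the intermediate value obtained here reflects the partial driving, in the same spirit as the bell-shaped curve of figure 12.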
Let us now go back to the important issue of defining criticality through complexity matching. On the basis of the numerical results of figure 12 and of the complexity matching principle we conclude that k = 3.3 × 10^−7 is the critical value of the control parameter k. We have to reverse the order of illustration of the numerical results. We go back to figures 11, 10, 9 and 8. All these figures share the same property of revealing a strong change of behavior around k_C = 3.3 × 10^−7. It is important to notice that figures 10, 9 and 8 refer to the observation of the time distance between two consecutive firings, with the survival probability being illustrated by figure 4. The conclusion is that the condition of maximal synchronization between two neural networks with the same complexity, at k = k_C = 3.3 × 10^−7, corresponds to the survival probability of figure 4 making a transition from the Poisson regime, where cooperation is not yet perceived, to a regime showing the first signs of the periodicity generated by the simultaneous firing of many neurons. This is the main reason why we claim the importance of adopting temporal complexity as the indicator of criticality.

The power law distribution of avalanche size
As pointed out in section 1, our theoretical model explains both the weak sensitivity of the brain to harmonic stimuli [1] and the results regarding the synchronization between the brains of the two rats of [3], but it does not lead to a satisfactory agreement with the adoption of p(s) ∝ 1/s^1.5 as criticality indicator. In this section, with the help of figure 13, we show this important fact. We monitor the number of neurons firing at the same time, s, and we evaluate the corresponding distribution density p(s). We find that the power law 1/s^1.5 emerges at about k = 5 × 10^−7, which is a value significantly larger than the value k_C = 3.3 × 10^−7 signaling the emergence of temporal complexity. The adoption of temporal complexity, yielding k_C = 3.3 × 10^−7, produces very good agreement with the maximal correlation between the driven and driving complex networks of figure 12, whereas the adoption of the power law size distribution of avalanches would correspond to the k region on the right of the maximum of the bell-shaped curve of figure 12, very far away from the condition of maximal transport of information. This result confirms the earlier observation of [20] and must be kept in mind to establish the true nature of temporal complexity, which is clearly well distinct from the criticality signaled by the inverse power law of the avalanche size distribution.
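The avalanche-size analysis can be sketched as follows. The counts here are synthetic, drawn by hand from p(s) ∝ 1/s^1.5 rather than from the network simulation, so that the slope-recovery step can be checked in isolation.

```python
import numpy as np

def avalanche_size_distribution(firing_counts):
    """Histogram of s = number of neurons firing in the same time step,
    normalized to a probability distribution over s >= 1."""
    s = np.asarray(firing_counts)
    s = s[s > 0]
    sizes, counts = np.unique(s, return_counts=True)
    return sizes, counts / counts.sum()

# Synthetic avalanche sizes drawn from p(s) ∝ 1/s^1.5 on s = 1..1000
rng = np.random.default_rng(4)
support = np.arange(1, 1001)
p = support ** -1.5
p /= p.sum()
counts = rng.choice(support, size=200_000, p=p)

sizes, ps = avalanche_size_distribution(counts)
# A log-log fit over the well-sampled range should recover a slope near -1.5.
slope = np.polyfit(np.log(sizes[:50]), np.log(ps[:50]), 1)[0]
```

Restricting the fit to the well-sampled small-s range avoids the noisy tail; on real data one would also correct for the finite-size cutoff imposed by N.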
It is important to notice that the results of this section do not necessarily conflict with the literature in this field of research. The readers can consult the excellent volume of [15] to assess the important fact that a large variety of neural models yield avalanches with a power-law size distribution with power index 1.5. However, to the best of our knowledge no significant attention has been devoted to temporal complexity. It is plausible to conjecture that many models of cooperating neurons may lead to the temporal complexity of this article. Missing the observation of temporal complexity may lead to the conclusion that criticality corresponds to the emergence of avalanches with the inverse power index 1.5, which in figure 13 is shown to occur at k = 4.99 · 10^-7, i.e., a value of the control parameter k that is 1.5 times larger than the value k_C = 3.3 · 10^-7 that generates temporal complexity.

[Figure 13 caption: curves for k = 1.99 · 10^-7 (brown), 3.3 · 10^-7 (red), 4.99 · 10^-7 (yellow), 6.99 · 10^-7 (dark red) and 1.7 · 10^-6 (blue with blue dashes); the black line is for reference and has slope −1.5.]
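The measurement of p(s) described above can be reproduced along the following lines. This is our own illustration: the avalanche sizes are drawn synthetically from a distribution with tail exponent 1.5 to stand in for the counts of simultaneously firing neurons, and the log-binning choice is ours.

```python
import numpy as np

def avalanche_size_density(sizes, n_bins=30):
    """Log-binned probability density p(s) of avalanche sizes."""
    sizes = np.asarray(sizes, dtype=float)
    edges = np.logspace(0.0, np.log10(sizes.max()), n_bins)
    counts, edges = np.histogram(sizes, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    keep = counts > 0
    return centers[keep], counts[keep]

# Synthetic sizes with p(s) ~ 1/s^1.5 (survival exponent 0.5), standing in
# for the number of neurons firing at the same time.
rng = np.random.default_rng(1)
sizes = np.floor(rng.pareto(0.5, 200_000) + 1.0)
s, p = avalanche_size_density(sizes)
slope = np.polyfit(np.log(s), np.log(p), 1)[0]  # expected near -1.5
```

Logarithmic binning is used because linear binning leaves the tail of a power law dominated by empty and single-count bins, making the slope estimate unreliable.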

Inhibition-induced transition to criticality
This section is devoted to discussing the influence of inhibitory links, namely of the links L_ij in equation (4) with negative values, on the emergence of criticality. We repeat the calculations of the earlier sections and recover the same results with only one significant change: the critical value k_C is shifted to higher values.
Let us discuss the effect of inhibition starting from the results of figure 7. Figure 14 shows that 10% inhibitory links have the effect of moving k_C from about 3.5 · 10^-7 to about 4.2 · 10^-7. This corresponds to shifting to the right the first-order-like phase transition of figure 8, as shown in figure 15.
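A concentration of inhibitory links can be realized, for instance, by assigning random signs to the coupling matrix. The sketch below is our illustration only: the unit link magnitudes and the absence of self-links are our assumptions, since in the model the links are scaled by the cooperation strength k.

```python
import numpy as np

def coupling_matrix(n, c_inhibitory, seed=5):
    """Random sign matrix: each directed link L_ij is -1 (inhibitory) with
    probability c_inhibitory and +1 (excitatory) otherwise."""
    rng = np.random.default_rng(seed)
    signs = np.where(rng.random((n, n)) < c_inhibitory, -1.0, 1.0)
    np.fill_diagonal(signs, 0.0)  # no self-links (our assumption)
    return signs

L = coupling_matrix(200, 0.10)
frac_inhibitory = (L < 0).sum() / (200 * 199)  # measured concentration
```

With this construction the concentration c_I is a single scalar control parameter, which is exactly the role it plays in the discussion of the shifted critical value.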
The most relevant effect of inhibitory links is illustrated in figure 16, which refers to the experiment of information transfer from network B to network A. We see that the result is identical to that of figure 12, with the bell-shaped curve shifted to the right so as to make the maximum of the cross-correlation function coincide with the new value of the critical strength k_C.
An equivalent, but more interesting, way of interpreting this result is as follows. Let us imagine that both complex networks are given a cooperation strength k ≈ 4.3 · 10^-7. The transfer of information from B to A is small. As an effect of increasing the number of inhibitory links in both systems, at the critical concentration of 10% the maximum of the bell-shaped curve illustrating the cross-correlation C is reached, thereby ensuring maximal information transport. We expect, in fact, on the basis of the earlier work of [20, 31], a close correspondence between the cross-correlation C and the mutual information, and consequently with the transport of information from B to A.
In other words, increasing c_I, the concentration of inhibitory links, has the effect of shifting the bell-shaped curve describing the cross-correlation C from left to right. If we focus our attention on a value k > k_C corresponding to a small correlation in the absence of inhibitory links, and we gradually increase c_I, we see that C increases, reaches a maximal value and then decreases as the shifting bell-shaped curve keeps moving to the right. In conclusion, we get a resonant-like curve with the maximum at c_I = 10% if we select, in the absence of inhibitory links, k ≈ 4.3 · 10^-7.
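The cross-correlation C used throughout as transport-of-information indicator can be computed, at zero lag, as a normalized scalar product of the two mean-field signals. The toy example below is ours: white-noise signals with a tunable noise level stand in for the driven network near and far from the resonance condition.

```python
import numpy as np

def cross_correlation(a, b):
    """Zero-lag normalized cross-correlation of two mean-field time series."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy illustration: the driven signal follows the driving one through a noisy
# linear response; the noise level stands in for the distance from criticality.
rng = np.random.default_rng(2)
drive = rng.standard_normal(10_000)
driven_near = drive + 0.5 * rng.standard_normal(10_000)  # strong response
driven_far = drive + 5.0 * rng.standard_normal(10_000)   # weak response
C_near = cross_correlation(drive, driven_near)
C_far = cross_correlation(drive, driven_far)
```

Scanning C as a function of a control parameter (k or c_I) and locating its maximum is, in essence, the empirical criticality criterion adopted in this paper.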

Response of the complex system to harmonic stimuli
We are now in a position to make a numerical simulation corresponding to the recent experimental result of [1]. We replace the units of network B with identical harmonic oscillators and require that the mean square value of x(t) of the oscillators playing the role of the perturbing neurons be identical to the mean square value of the perturbing neurons in the experiment illustrated by figure 12. The result of this experiment is illustrated by figure 17.
We see that the cross-correlation C undergoes the maximal positive change, while remaining slightly negative, when the perturbed network is at criticality. We believe that this is expected and is a manifestation of the important property that neural networks at criticality are more sensitive to external stimuli. However, the height of the bell-shaped curve is a qualitative measurement of the amount of information transfer, and figure 17 can thus be used as proof that, to go beyond the limited response to harmonic stimuli discussed in the recent work of [1], it is necessary to perturb a neural network with a stimulus produced by another neural network with the same complexity. This is the complexity matching phenomenon emphasized by the earlier work of our group [4-6, 11].
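The comparison behind figure 17 requires the harmonic drive to carry the same mean square value as the complex one. A minimal sketch of this amplitude matching follows; the frequency, time step, and white-noise stand-in for the output of network B are our choices, not values from the paper.

```python
import numpy as np

def matched_harmonic(signal, omega, dt):
    """Harmonic drive A*sin(omega*t) whose mean square value matches `signal`
    (since the mean square of A*sin is A**2/2, we set A**2 = 2*<signal**2>)."""
    t = np.arange(len(signal)) * dt
    amplitude = np.sqrt(2.0 * np.mean(np.asarray(signal, dtype=float) ** 2))
    return amplitude * np.sin(omega * t)

rng = np.random.default_rng(3)
complex_drive = rng.standard_normal(100_000)  # stand-in for network B output
harmonic_drive = matched_harmonic(complex_drive, omega=0.1, dt=0.1)
ms_complex = np.mean(complex_drive ** 2)
ms_harmonic = np.mean(harmonic_drive ** 2)
```

Matching the mean square value guarantees that any difference in the measured cross-correlation is due to the temporal structure of the stimulus, not to its intensity.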

Concluding remarks
In this section we discuss problems that are the current objects of investigation by researchers in the field of complexity, what this article contributes toward settling these problems, and what is still left open.

Shedding light into criticality
The hypothesis that the brain is a complex system at criticality is shared by a large number of investigators [13, 14, 17, 32]. However, the exact meaning of criticality is not yet clearly established. Werner [32] refers to phase transitions in physics, as illustrated by renormalization group theory, thereby implying that the brain lives in a condition between local and long-range correlation. On the other hand, the attractive theoretical perspective of SOC [33-36] is directly or indirectly adopted by many authors, thereby raising the question of its connection with traditional phase transition processes.
Of remarkable interest for this article is the work of [36], referring to the dynamics of sleep-wake transitions during sleep, which has been the subject of the earlier work of [37]. These authors found that sleeping actually consists of a sequence of wake and sleep states, and that the time duration of the wake states follows an inverse power law with index μ = 2.1, very close to the value μ = 2 that would generate the condition of ideal 1/f noise [2], while the time duration of the sleep states is described by an exponential distribution density. In the latest work of [36] they emphasize the connection between their result and SOC, in accordance with theoretical arguments already adopted in the earlier work of [38]. It is remarkable that the authors of [39, 40] found similar results for the distribution of the number of rapid transition events occurring in different regions of the brain, namely scale-free in the wake state and exponential in the sleep state. These properties may be connected, especially if we keep in mind the emergence of 1/f noise from the process of 'stochastic feedback' studied in the earlier work of [41]. The authors of [19] have addressed the issue of a possible connection of criticality in the brain with the concept of extended criticality [42], according to which the singularity of ordinary phase transition processes in physics is replaced by a wide set of control parameter values. For each of these values the complex system is characterized by a different form of organization, sharing the common property of departing from the usual condition of uncorrelated thermodynamical equilibrium. These concepts are illustrated by these authors in their recent book [43]. The results of the research work of [20] also suggest the possibility that extended forms of phase transition are a manifestation of the fact that in biology the number of interacting units is finite, a condition that is still poorly understood.
As a consequence of the lack of a widely accepted theory of criticality, in this paper we adopted the empirical procedure of defining criticality as the value of the control parameter corresponding to the optimal condition for the transfer of information from one complex system to another (i.e., the maximum of the cross-correlation). The adoption of this criterion leads us to recover the main result of the earlier work [20]: the maximal correlation between driving and driven system is realized when both networks are characterized by temporal complexity, namely, when the ML survival probability signals the transition from the Poisson-like regime to a regime showing the first signs of periodicity. We are thus led to reiterate our conviction that the proper signature of criticality is temporal complexity. Again, as in the case of the earlier work [20], the inverse power law distribution of avalanche intensities is realized at higher values of k, as shown by figure 13. This result leads us to conclude that the scientific foundation of criticality in the brain is still an open problem.

New results
We study an integrate-and-fire model of the same type as that adopted in earlier work of our group [19, 20], with a different form of randomness. The cooperation strength has the role of counterbalancing the drift towards ordinary statistics due to disorder. In this model disorder is generated by the random choice of initial conditions for each neuron after firing, whereas in the earlier work randomness was due to the steady action of a stochastic force on each neuron.

[Figure 17 caption: the cross-correlation between a complex network and the complex network driving it (red dots), and between a complex network and the harmonic network driving it (blue dots).]
In addition to recovering with the new model the results of the earlier work, thereby proving their generality, in this article we also obtain new results. In figure 11 we show that the transition from the exponential to the ML regime corresponds to a significant change in the mean number of firings per unit time. The number of firings per unit of time grows linearly with k from the minimal value g_0 to a maximal value occurring at k_C = 3.3 · 10^-7. This linear increase is due to the special nature of the Izhikevich-like model adopted in this paper, which makes the time distance between two consecutive firings of the same neuron become shorter with time. In the earlier work of [20] this rate was expected to remain constant because after firing each neuron jumps back to x = 0, while here each neuron after firing moves to a random value x > 0, with a distribution density that becomes equivalent to that of the earlier model only in the limiting case of very large values of β in equation (18). In the previous work the observation of this constant rate was made difficult by the numerical inaccuracy of the corresponding algorithm. The use of a stochastic force to generate the randomness of a single neuron was incompatible with the analytical formula of equation (19), which allows the numerical results of this paper to be more accurate, as shown by figure 11. The price to pay for the numerical accuracy of figure 11 is, however, that in the region k < k_C the rate of firings increases linearly with k rather than remaining constant, as it should when the cooperation strength k is not yet strong enough to make the neurons depart from the condition of total independence from one another.
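To make the model's structure concrete, the following is a deliberately simplified sketch, not the paper's equations (18)-(19): we assume a linear drift instead of the Izhikevich-like acceleration, a unit threshold, an exponential density for the random reset value x > 0 with parameter beta, all-to-all excitatory links, and arbitrary parameter values. It nevertheless shows the two ingredients discussed above: randomness from the post-firing reset and a firing rate that grows with the cooperation strength k.

```python
import numpy as np

def simulate(n_neurons=100, k=0.0, g0=1e-2, beta=5.0, n_steps=20_000, seed=4):
    """Toy integrate-and-fire network: each neuron drifts at rate g0 toward
    threshold 1; every firing kicks all other neurons by k; after firing a
    neuron restarts from a random x > 0 drawn from an exponential density
    with parameter beta (stand-in for the reset rule of equation (18))."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_neurons)
    firings = 0
    for _ in range(n_steps):
        x += g0                         # deterministic drift toward threshold
        fired = x >= 1.0
        n_fired = int(fired.sum())
        firings += n_fired
        if n_fired:
            x[~fired] += k * n_fired    # cooperative kick from each firing
            # random reset x > 0, clipped below threshold
            x[fired] = np.minimum(rng.exponential(1.0 / beta, n_fired), 0.99)
    return firings / (n_steps * n_neurons)  # rate per neuron per time step

rate_low_k = simulate(k=0.0)     # independent neurons
rate_high_k = simulate(k=5e-3)   # cooperative regime
```

Even in this crude version the cooperative kicks shorten the time distance between consecutive firings, so the mean firing rate increases with k, qualitatively matching the behavior shown in figure 11.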
For k > k_C the dependence of the mean number of firings per unit of time changes as an effect of the transition to the supercritical regime. This supercritical condition, for k large enough, yields avalanches with p(s) ∝ 1/s^1.5. At the same time, as emphasized in the earlier work of [20], the renewal nature of temporal complexity is lost due to the effects of a finite maximum firing time (which truncates the waiting-time distribution). Time periodicity becomes evident for about k > 4 · 10^-4, as shown in figure 4. Finally, this article shows that the modest efficiency of a complex system in the supercritical regime can be improved by increasing the number of inhibitory links, confirming observations recently made with other models and theoretical arguments [16]. It is important to remark that the increase of inhibitory links is a form of homeostatic control that in the work of [41] is found to have the effect of generating a 1/f spectrum. From within our theoretical perspective this form of control generates the ML function that, as discussed in section 4.2, is close to ideal 1/f noise. We think, therefore, that the results of this article may contribute to understanding the important issue of the regulation of biological rhythms.