Enhancement of signal sensitivity in a heterogeneous neural network refined from synaptic plasticity

Long-term synaptic plasticity induced by neural activity plays a key role in shaping neural connectivity and the development of the nervous system. It is therefore more reasonable to consider self-organized neural networks than to impose a specific topology a priori. In this paper, we propose a novel network evolved through a two-stage learning process, the stages being guided respectively by two experimentally observed synaptic plasticity rules: the spike-timing-dependent plasticity (STDP) mechanism and the burst-timing-dependent plasticity (BTDP) mechanism. Owing to the heterogeneity of the neurons, which exhibit different degrees of excitability, a two-level hierarchical structure emerges after the synaptic refinement. This self-organized network shows higher sensitivity to afferent current injection than alternative archetypal networks with different neural connectivity. Statistical analysis also demonstrates that it has the small-world properties of short path length and high clustering coefficient. Thus the selectively refined connectivity enhances neuronal communication and improves the efficiency of signal transmission in the network.


Introduction
Recently, the structure-function relations in brain networks have been broadly investigated. It is clear that network structure determines the dynamics of the network. Meanwhile, in most real-world networks, the evolution of structure is essentially affected by the dynamical state of the network. Thus, coevolutionary or adaptive networks, which possess complicated mutual interaction between the time-varying network topology and the nodes' dynamics, deserve more attention in network research [1,2]. Many recent studies of neural networks have focused on the network dynamics under a predefined topological structure. Since neuronal connectivity in the brain is flexible and perpetually changing, it is more natural to consider neural networks with self-organized synaptic connectivity.
It is widely believed that long-term synaptic plasticity plays a critical role in learning, memory and the development of the nervous system [3]-[5]. A well-known form of synaptic refinement is spike-timing-dependent plasticity (STDP), which shows an asymmetric time window for cooperation and competition among developing retinotectal synapses [6]. It is a spike-based plasticity in which a synapse is potentiated if the presynaptic neuron fires shortly before the postsynaptic neuron; otherwise, the synapse is depressed. In effect, STDP exploits the possible causal relationship between each pair of pre- and postsynaptic neurons. It has been widely found in many neocortical layers and brain regions [6]-[8]. A recent study reports that the bidirectional and unidirectional connections developed from STDP learning can reflect different neural codes [9]. In [10], a feed-forward structure with the emergence of a pacemaker, the neuron with the highest inherent firing frequency, was obtained by the organization of STDP. Different from this spike-based asymmetric learning rule, a burst-based symmetric plasticity, namely burst-timing-dependent plasticity (BTDP), was measured in the developing lateral geniculate nucleus (LGN) using information-theoretic approaches for the first time in [11,12]. The results of [11,12] suggest that information can be preserved by bursts of action potentials on the order of seconds rather than by single spikes within milliseconds. The order of the burst timings is not important; only the relative or overlapping time of the bursts controls synaptic potentiation or depression. It is still an open question whether STDP or BTDP is more relevant in the formation of retinotopic maps [13]; computational results indicate that retinotopic refinement driven by retinal waves can be robustly achieved using the BTDP rule [12,14].
In [15], simulation results show that the segregation of functionally distinct (ON and OFF) retinal ganglion cells (RGCs) in the LGN can be successfully guided by BTDP instead of STDP.

In our previous work [16], we studied how the heterogeneity of neurons influences the dynamical evolution and the emergent topology of a network guided by STDP learning. Heterogeneity, which triggers competition between individual neurons, plays an important role in phase synchronization [17] and pattern formation [18], and in shaping the activity of neural networks [19]. After learning, the dynamical information contained in the heterogeneous neurons was successfully encoded into the emergent active-neuron-dominant structure. This structure was shown to significantly enhance the coherence resonance and stochastic resonance of the entire network, which benefits spiking coherence and the detection of weak signals, respectively [20,21]. Building on this work, we use the above two activity-dependent synaptic plasticity rules for a two-stage evolution of the heterogeneous neural network. First, considering that each neuron is most readily affected by the dynamic activity of its neighbors, we assume that synapses among neighboring neurons evolve via the STDP rule (local modification). From the resulting network, which has the active-neuron-dominant structure, some of the active neurons with strong outward links are selected as the leading neurons. Then, the long-range connectivity between the leaders is formed and refined following the BTDP rule (global modification). This step is based on the evidence that synaptic modification via BTDP occurs between neural populations in different areas with second-long burst activity during retinotopic map refinement [12,14,15]. Because after the STDP update the synchronized populations are dominated by the leading neurons (which can also be seen as hub nodes), from which the excitation of hierarchical network activity spreads [22], only synapses between the leading neurons are modified by BTDP.
Here, the rate of the afferent current injected into each leader is the variable instructing the synaptic update: synapses between low-frequency-stimulated leaders are attenuated and the others are strengthened. This reorganization allows the leading neurons that receive more intensive stimulation to communicate more easily with the other leaders. Computational results show that the finally obtained two-level hierarchical network ensures that information propagates efficiently through the whole network as follows: input → some leaders → other leaders → non-leaders. In addition, this network has the small-world properties, i.e. a small shortest path length (SPL) and a large clustering coefficient [23]. There is strong evidence for the existence of small-world topology in functional networks of the human brain [24,25]. Recent studies have shown that the short path length and high clustering coefficient of small-world networks can facilitate a pacemaker's influence on the whole network and thus favor the detection of weak stimuli [26]-[28]. Our results are highly consistent with these studies, indicating that such an architecture promotes efficient inter-regional communication in cortical circuits.

Neuron model and synaptic plasticity
In this paper, regular spiking neurons are modeled by the two-variable integrate-and-fire (IF) model of Izhikevich [29], which has been shown to be both biologically plausible and computationally efficient. It is described by

dv_i/dt = 0.04 v_i^2 + 5 v_i + 140 − u_i + I + I_i^syn + ξ_i,
du_i/dt = a (b_i v_i − u_i),

with the after-spike reset: if v_i ≥ 30, then v_i ← c and u_i ← u_i + d,

where i = 1, 2, . . . , N, v_i represents the membrane potential and u_i is a membrane recovery variable. The parameters a, b, c and d are dimensionless. The variable ξ_i is independent Gaussian noise with zero mean and intensity D that represents the noisy background. I stands for the externally applied current, and I_i^syn is the total synaptic current through neuron i, governed by the dynamics of the synaptic variable s_j:

I_i^syn = Σ_j g_ji s_j (v_syn − v_i),
ds_j/dt = α(v_j)(1 − s_j) − s_j/τ, with α(v_j) = α_0 / (1 + exp(−v_j / V_shp)).

Here the synaptic recovery function α(v_j) can be taken as (approximately) the Heaviside function. When the presynaptic cell is in the silent state (v_j < 0), ds_j/dt reduces to ds_j/dt = −s_j/τ, where τ is the decay time constant; otherwise, s_j quickly jumps to 1 and acts on the postsynaptic cells. The synaptic conductance g_ji from the jth neuron to the ith neuron will be updated through STDP or BTDP, as shown later. As only excitatory synapses are considered here, the synaptic reversal potential v_syn is set to 0.
In this model, b describes the sensitivity of the recovery variable to the subthreshold fluctuations of the membrane potential [29]. The critical value for the Andronov-Hopf bifurcation is b_0 ≈ 0.2. For b < b_0, the neuron is in the rest state and is excitable; for b > b_0, the system has a stable periodic solution generating action potentials. Hence, the degree of a neuron's excitability is governed by this parameter. Neurons with larger b exhibit greater excitability and fire at a higher frequency than others (see figure 1, right). In order to establish a heterogeneous network, b_i is uniformly distributed in [0.1, 0.26] in our networks. Thus, each neuron produces spike trains with a different firing rate when subjected to the same external input and noisy background.
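To make the role of the parameter b concrete, the following is a minimal single-neuron sketch (our own illustration, not the authors' code), using the model equations and the parameter values quoted in this section with the Euler-Maruyama scheme mentioned later:

```python
import numpy as np

def izhikevich_spike_count(b, I, T=1000.0, dt=0.05, D=0.1, seed=0):
    """Count spikes of a single Izhikevich neuron over T ms,
    integrated with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    a, c, d = 0.02, -65.0, 8.0       # regular-spiking parameters from the text
    v, u = c, b * c                  # start near the resting point
    spikes = 0
    for _ in range(int(T / dt)):
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        v += dt * dv + np.sqrt(dt) * D * rng.standard_normal()
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike: apply the after-spike reset
            v = c
            u += d
            spikes += 1
    return spikes

# A neuron beyond the bifurcation point (b > 0.2) fires tonically,
# while a weakly excitable one (b < 0.2) stays near rest.
n_low = izhikevich_spike_count(b=0.12, I=3.0)
n_high = izhikevich_spike_count(b=0.25, I=3.0)
```

Running this confirms the heterogeneity mechanism: under the same input and noise, the high-b neuron fires far more often than the low-b one.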
In our simulation, synapses between neighboring neurons are updated by the STDP modification function F, which selectively strengthens the pre-to-post synapses with relatively shorter latencies or stronger mutual correlations while weakening the remaining synapses [30] (figure 2, top). The synaptic conductance is updated by

Δg_ji = g_max F(Δt),
F(Δt) = A_+ exp(−Δt/τ_+) for Δt > 0,
F(Δt) = −A_− exp(Δt/τ_−) for Δt < 0,

where Δt = t_i − t_j, and t_j and t_i are the spike times of the presynaptic and postsynaptic cells, respectively. F(Δt) = 0 if Δt = 0. τ_+ and τ_− determine the temporal window for synaptic modification. The parameters A_+ and A_− determine the maximum amount of synaptic modification. Here, we set τ_− = τ_+ = 20, A_+ = 0.05 and A_−/A_+ = 1.05, as used in [30]. The peak synaptic conductance is restricted to the range [0, g_max], where g_max = 0.04 is the limiting value.
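As an illustration, the STDP window and the clipped conductance update can be sketched as follows (scaling the window by g_max follows the additive scheme of [30]; this is our sketch, not the authors' code):

```python
import math

A_PLUS, A_MINUS = 0.05, 0.05 * 1.05   # A-/A+ = 1.05 gives a net depression bias
TAU_PLUS, TAU_MINUS = 20.0, 20.0      # temporal windows (ms)
G_MAX = 0.04                          # peak conductance bound

def stdp_window(dt):
    """F(dt) with dt = t_post - t_pre in ms: potentiation for dt > 0,
    depression for dt < 0, zero at exact coincidence."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

def apply_stdp(g, dt):
    """One pairing event: scale the window by G_MAX and clip the
    conductance to its allowed range [0, G_MAX]."""
    g = g + G_MAX * stdp_window(dt)
    return min(max(g, 0.0), G_MAX)
```

Because A_− τ_− > A_+ τ_+, uncorrelated pre/post spike pairs depress a synapse on average, which is what drives the competition described below.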
Synaptic connections between neurons with strong outward links are rewired through BTDP, which depends on the overlap of pre- and postsynaptic bursts on the order of 1 s [12]. That is, a small latency between the onset times of the pre- and postsynaptic bursts results in large synaptic potentiation; otherwise the synapse is depressed. Different from STDP, BTDP is symmetric, so that as long as the pre- and postsynaptic bursts are coincident, the synapse is enhanced, whereas non-overlapping bursts produce synaptic depression (figure 2, bottom). The modification function [12] is symmetric in Δt = t_j − t_i (in seconds), where t_i and t_j are the burst onset times of the two connected cells: it is maximally potentiating at Δt = 0, decreases with |Δt|, and becomes depressing for non-overlapping bursts. With this rule, synapses are maintained in the range [0, G_max], where G_max = 0.01. Other parameters used in this paper are a = 0.02, c = −65, d = 8, α_0 = 3, τ = 2, V_shp = 5 and D = 0.1; the remaining parameters are given in each case. Numerical integration of the system is performed by the explicit Euler-Maruyama algorithm [31] with a time step of 0.05 ms.
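A qualitative sketch of such a symmetric BTDP rule is given below. The amplitudes B_PLUS and B_MINUS and the 1 s crossover latency T_C are illustrative assumptions of ours; only the shape (peak potentiation at zero latency, depression for non-overlapping bursts) follows the description above and [12]:

```python
# B_PLUS, B_MINUS and T_C are assumed values for illustration only.
B_PLUS, B_MINUS, T_C = 0.005, 0.0025, 1.0
G_MAX_LEADER = 0.01

def btdp_window(dt):
    """H(dt), dt = difference of burst onset times in seconds.
    Symmetric: only |dt| matters, not the order of the bursts."""
    if abs(dt) <= T_C:
        return B_PLUS * (1.0 - abs(dt) / T_C)   # overlapping bursts: potentiate
    return -B_MINUS                              # non-overlapping bursts: depress

def apply_btdp(G, dt):
    """One burst pairing, clipped to [0, G_MAX_LEADER]."""
    return min(max(G + btdp_window(dt), 0.0), G_MAX_LEADER)
```

The symmetry of the window (btdp_window(dt) == btdp_window(-dt)) is the key contrast with STDP: BTDP acts as a coincidence detector rather than a causality detector.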

Evolution of the network
A network of N = 1000 spatially randomly distributed neurons with different degrees of excitability/activity is considered (figure 1). In this network, each neuron is initially bidirectionally connected to its 100 spatially nearest neighbors with the same conductance g_max/2. The whole network is subjected to an external current (I = 3) as a learning environment. We then apply a two-stage learning process. First, synapses between each neuron and its 100 nearest neighbors are updated through the STDP rule. After a sufficiently long period, the connections evolve into a steady state and exhibit the locally active-neuron-dominant property, as described in our previous work [16]. Figure 3(a) shows the histogram of the normalized synaptic weights among the 100 neighbors of a single neuron after the STDP update. Most of the synapses are rewired to 0 or g_max. Competition within this heterogeneous network causes the active cells to acquire strong out-degree synapses and weak in-degree synapses, while the inactive ones show the opposite (figure 3(b)). In this way, the internal dynamics of different neurons is encoded in the topology of the emergent network, and the communication between active and inactive neurons is thereby improved.
In the second stage, about 200 neurons that possess more than 64 strong outward synapses (g > 0.9 g_max) are selected as the leading neurons. Initially, long-projecting connections between those non-neighboring leaders are established with the same conductance G_max/2. Each leader is individually allocated randomly occurring 1 s pulse injections, which simulate spontaneous neural activity from other regions. The average frequency of the afferent pulses for each leader follows a uniform distribution over 0.1-0.9 Hz. Then the BTDP rule is applied to the synaptic modifications among the leaders. When the rewiring of the structure finally ceases, most of the intermediate-valued synapses have disappeared and the weight distribution has become polarized (figure 3(c)). When the histogram and image of synaptic weights become unchanged, the network structure is assumed to have reached a stationary state and the evolution ceases.
Synapses between leaders receiving low-rate stimulation are weakened, while the other connections are strengthened. This is because leaders subjected to high-rate stimuli are more prone to burst coincidentally with the others, so their synapses are potentiated. Note that only the frequency of the afferent stimulation, which determines the chance of overlapping bursts, is responsible for the synaptic refinement, rather than the individual spikes within bursts.
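The leader-selection criterion of the second stage can be sketched as follows (a toy illustration; the orientation convention W[j, i] = conductance from neuron j to neuron i, and the 200-node toy size, are our assumptions):

```python
import numpy as np

def select_leaders(W, g_max=0.04, min_strong=64):
    """Return indices of neurons with more than `min_strong` strong
    outward synapses (g > 0.9 * g_max), as in the second learning stage.
    W[j, i] holds the conductance from neuron j to neuron i."""
    strong_out = (W > 0.9 * g_max).sum(axis=1)
    return np.flatnonzero(strong_out > min_strong)

# Toy check: a neuron whose outward row saturated at g_max is selected,
# while one with only a handful of strong outward links is not.
W = np.zeros((200, 200))
W[7, :] = 0.04          # neuron 7 drives everyone at full strength
W[3, :10] = 0.04        # neuron 3 has only 10 strong outward links
leaders = select_leaders(W)
```

In the full simulation this threshold picks out roughly the 200 most excitable neurons, since STDP has already driven their outward weights toward g_max.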
After these two learning processes, a two-level hierarchical network is obtained (figure 4). Input applied to any of the leaders can be propagated through the whole network efficiently as follows: input → some leaders → other leaders → non-leaders.
For comparison, we also construct alternative networks in which the synaptic conductances are randomly assigned within [0, g_max] for the neighboring connections and [0, G_max] for the connectivity between leading neurons. Note that all of these networks share the same adjacency matrix and average synaptic conductance. In order to examine the sensitivity of these networks in response to weak input, two cases are considered: only 20 selected leading neurons are injected with the current input I = 2 (figure 5); all of the neurons are injected with the current input I = 0.1 (figure 6). The results indicate that spiking can be successfully transferred among all the members only in the +S + B network (the network refined by both STDP and BTDP), which exhibits the most intensive responsiveness to the input. In the other networks, due to their inefficient synaptic connections, the individuals cannot communicate effectively and synchronous activity therefore fails. Also note that changes in the number of neighbors of each node and in the initial synaptic structure and conductance values influence not the formation of the final network topology but only the speed of the convergence process. Moreover, to ensure that our results do not depend on the specific realization of the uniform distribution of the parameter b_i among neurons, we have performed the learning process over several different realizations and find no significant changes to the final network topology. In the following section, we discuss the statistical analysis of these networks to reveal how the topology affects the network activity.

Statistical analysis of the network
Complex networks are usually characterized by three statistical parameters: the degree distribution, the average SPL and the clustering coefficient. Based on the definitions of these parameters for a weighted but undirected network in [32], we extend some of them to be applicable to our weighted and directed networks. In this section, all synaptic weights (g_ij) are normalized to the interval [0, 1]. The out-degree k_i(out) and in-degree k_i(in) of node i are defined, respectively, as the sum of the weights of all out-directed links and all in-directed links attached to node i:

k_i(out) = Σ_j g_ij,  k_i(in) = Σ_j g_ji.

As the efficiency of communication between two nodes in neural networks is proportional to the weight, we use the definition of the SPL (d_ij) for communication networks described in [32], in which the length of an edge is the reciprocal of its weight:

d_ij = min over all paths from i to j of Σ_(edges (k,l) on the path) 1/g_kl.

Thus, the SPL for out-directed links is L_i(out) = (1/N) Σ_{j=1}^{N} d_ij. Because L_i(out) is the main index characterizing the efficiency of signal propagation from the leading neurons to the others, the in-directed SPL will not be discussed here.
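Under the reciprocal-weight convention above, the out-directed SPL can be computed with a standard Dijkstra search. This is a sketch of ours, not the authors' implementation:

```python
import heapq

def out_spl(W, i):
    """Average shortest path length from node i, where the length of an
    edge equals the reciprocal of its normalized weight W[u][v] in (0, 1].
    Unreachable nodes contribute float('inf')."""
    n = len(W)
    dist = [float('inf')] * n
    dist[i] = 0.0
    heap = [(0.0, i)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry, skip
        for v in range(n):
            w = W[u][v]
            if w > 0 and d + 1.0 / w < dist[v]:
                dist[v] = d + 1.0 / w
                heapq.heappush(heap, (dist[v], v))
    return sum(dist) / n

# Strong links shorten paths: the weak direct link 0 -> 2 (weight 0.25,
# length 4) is bypassed by two strong links (length 1 + 1 = 2).
W = [[0.0, 1.0, 0.25],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0]]
L0 = out_spl(W, 0)
```

The toy example illustrates why STDP-potentiated strong outward links of the leaders directly reduce L_i(out).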
In the literature there are several definitions of the weighted clustering coefficient [32]. Here we adopt Zhang and Horvath's definition [33] for gene coexpression networks and extend it to our directed neural networks:

C_i(out) = [Σ_{j≠i} Σ_{k≠i,j} g_ij g_jk g_ki] / [(Σ_{j≠i} g_ij)² − Σ_{j≠i} g_ij²],

with C_i(in) defined analogously from the inward weights g_ji, where C_i(out) and C_i(in) are the clustering coefficients for the outward and inward connections, respectively. The overall clustering coefficient is taken as the average of these two values. Figure 7 shows that the +S + B network has the broadest distribution of both out-degrees and in-degrees. Increasing the proportion of strong outward/inward synapses and weak inward/outward synapses for the more/less excitable neurons selectively refines the network connections and increases the network diversity in accordance with the individual heterogeneity of the neural population. Also, most of the neurons' clustering coefficients (especially those of the leading neurons) in the +S + B network are higher than those of the other networks (figures 8(a) and (b)). The large enhancement of the clustering coefficient for the leading neurons in the networks with BTDP learning indicates that BTDP is responsible for such high clustering, which benefits communication among the leaders. Further, the shortest outward SPL of the leaders in the +S + B network (figures 8(c) and (d)) demonstrates the strong capability of this network for information transmission from the leading neurons to the others. To show this more clearly, we plot the average connectivity from the leaders to the 200 least excitable neurons in figure 8(e). The networks after the STDP update exhibit a huge potentiation of these connections, which enhances the probability of the less excitable neurons generating a response to external input. The average connections among the leading neurons are also examined (figure 8(f)). This figure shows that the BTDP update strengthens some of the leaders' synapses at the cost of weakening those of the others.
This refinement reinforces communication among the leaders by invoking some of them to spike first and then arousing the others. Therefore, our network organized by synaptic plasticity has a well-refined connectivity that shows the typical small-world properties, namely a small SPL and a high clustering coefficient [23].
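A direct (if naive, O(N³)) implementation of the outward clustering coefficient described above might look like this; the extension of the Zhang-Horvath formula to directed weights is our assumption:

```python
import numpy as np

def clustering_out(W):
    """Outward weighted clustering per node: triangle weight mass
    normalized by the maximum attainable from node i's outgoing weights
    (Zhang-Horvath style, naively extended to directed graphs)."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    C = np.zeros(n)
    for i in range(n):
        num = 0.0
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) == 3:            # three distinct nodes
                    num += W[i, j] * W[j, k] * W[k, i]
        s = W[i].sum() - W[i, i]                    # sum of outgoing weights
        q = (W[i] ** 2).sum() - W[i, i] ** 2        # sum of their squares
        denom = s * s - q
        C[i] = num / denom if denom > 0 else 0.0
    return C

# On a fully and symmetrically connected triangle (all weights 1, no
# self-loops), every node is perfectly clustered (C = 1).
C = clustering_out(np.ones((3, 3)) - np.eye(3))
```

For undirected (symmetric) weight matrices this reduces to the original Zhang-Horvath coefficient, which is a useful sanity check.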
All of the above-mentioned parameters characterize the global properties of networks. The distribution of network motifs is believed to be capable of exploring the local statistics of networks [34]-[36]. Here, the concentrations of three-node connected motifs/subgraphs in the updated and shuffled networks are examined in figure 9 by applying the motif-detection tool from http://www.weizmann.ac.il/mcb/UriAlon/. The adjacency connections are obtained by setting the normalized synapses with g > 0.9 to 1 and the others to 0. After the first step of STDP learning and before establishing the long-distance connections between leaders, only the unidirectional motifs exist, owing to the asymmetric update (figure 9(a)). This result is consistent with the network patterns observed in [10]. When the two updating processes are completed, the 5th motif (figure 9(c)), which arises from the addition of bidirectional links between leaders, is dominant in the final network structure (figure 9(b)). This subgraph represents the connections between leaders and non-leaders. Therefore, the concentrations of motifs further reveal the specific internal and local structure of the updated network.
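The three-node census itself can be reproduced with a small brute-force routine, shown here as a simplified stand-in for the Weizmann motif-detection tool cited above (the canonical-signature scheme is ours):

```python
from itertools import combinations, permutations

def triad_census(edges):
    """Count connected three-node subgraphs of a binary directed graph.
    `edges` is a set of (src, dst) pairs; returns {signature: count},
    where the signature is the induced edge pattern canonicalized over
    all relabelings of the three nodes."""
    eset = set(edges)
    nodes = sorted({u for e in eset for u in e})
    counts = {}
    for trio in combinations(nodes, 3):
        local = [(a, b) for a in trio for b in trio
                 if a != b and (a, b) in eset]
        # keep only trios whose induced subgraph touches all three nodes
        # (with three nodes, touching all three implies connectedness)
        if len({u for e in local for u in e}) < 3:
            continue
        sig = min(
            tuple(sorted((lab[a], lab[b]) for a, b in local))
            for perm in permutations(range(3))
            for lab in [dict(zip(trio, perm))]
        )
        counts[sig] = counts.get(sig, 0) + 1
    return counts

# A purely feed-forward chain contains only the unidirectional chain
# motif, mirroring the post-STDP, pre-BTDP situation of figure 9(a).
census = triad_census({(0, 1), (1, 2), (2, 3)})
```

For production use, the Weizmann tool (or a library such as a graph package with subgraph isomorphism) additionally compares motif counts against randomized networks, which this sketch omits.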

Discussion
In this paper, a novel neural network with heterogeneous neurons is obtained via two distinct learning rules. Synapses between neighboring neurons are modified by the STDP rule, so that the internal dynamics of neurons with different degrees of excitability is clearly extracted into an active-neuron-dominant topology. Further, the BTDP rule, with its long timescale, refines the long-range connections among the leading neurons by encoding the afferent pulse injections with different occurrence rates into an intensively-stimulated-neuron-dominant structure. The final network exhibits a two-level hierarchical structure and has higher sensitivity to weak input than the other networks with different structures. This network also has the small-world properties of a small SPL and a large clustering coefficient, indicating its high efficiency in signal processing. Leading neurons with the fastest inherent frequency in neural populations have also been discussed as pacemakers in [10]. Also, the 'first-to-fire' cells that consistently fire earlier than others were observed in the spontaneous bursting activity of rat hippocampal and rat cortical neuron cultures [37,38]. These results indicate that the leading neurons play an important role in the development and propagation of collective behavior and act as hubs for synchronous excitation in response to an external stimulus. In this study, we investigated how the intrinsic excitabilities of individual neurons are involved in the refinement of synaptic connections. The most excitable neurons naturally evolve into the leading neurons, which acquire a large amount of outward connectivity through competition with the others. This activity-dependent redistribution of connections greatly improves the network's sensitivity to a weak signal, while the other networks with constant or random connections fail.
Our result is highly consistent with the observations that fast random rewiring or strong connectivity impair weak signal detection [39].
Besides the hub role played by the leading neurons, the small SPL and high clustering coefficient of the self-organized neural network also contribute to the enhanced ability of weak-signal detection. This special architecture exhibits a two-level hierarchical property, which is believed to exist ubiquitously in the cerebral cortex of mammalian brains [22,40,41]. The interplay between the hierarchical structure and the temporal dynamics of neural networks has attracted much attention in recent years [41]-[43]. In this paper, we explore the possible relationship between learning rules with different timescales (STDP: millisecond-long; BTDP: second-long) and the formation of architecture on different spatial scales. Both the STDP and BTDP rules are long-term 'Hebbian' plasticity, including long-term potentiation (LTP) and long-term depression (LTD). The asymmetric STDP promotes causal connections between pre- and postsynaptic neurons and eliminates the others. Hence, the emergence of clusters led by the most excitable neuron can be observed in self-organized neural networks, especially in heterogeneous networks. BTDP, by contrast, serves as coincidence detection for the timing of bursts over the second-long timescale, no matter whether the pre- or postsynaptic burst occurs first; only synapses associated with coincident neural activity are further strengthened. However, the assumptions guiding the two-step implementation of these rules still need further neurophysiological investigation, both experimental and theoretical. The presented results might provide insight into the different roles and functions of both STDP and BTDP synaptic plasticity in the formation of neural circuits.