Article

Activeness: A Novel Neural Coding Scheme Integrating the Spike Rate and Temporal Information in the Spiking Neural Network

Zongxia Wang, Naigong Yu and Yishen Liao
1 Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2 Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China
3 Engineering Research Center of Digital Community, Ministry of Education, Beijing 100124, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(19), 3992; https://doi.org/10.3390/electronics12193992
Submission received: 4 August 2023 / Revised: 11 September 2023 / Accepted: 20 September 2023 / Published: 22 September 2023

Abstract

In neuromorphic computing, the coding method of spiking neurons serves as the foundation and is crucial for various aspects of network operation. Existing mainstream coding methods, such as rate coding and temporal coding, have different focuses, and each has its own advantages and limitations. This paper proposes a novel coding scheme called activeness coding that integrates the strengths of both rate and temporal coding methods. It encompasses the precise timing of the most recent neuronal spike as well as historical firing rate information. Basic characteristic tests demonstrate that this encoding method accurately expresses input information and exhibits robustness. Furthermore, an unsupervised learning method based on activeness-coding triplet spike-timing-dependent plasticity (STDP) is introduced, with the MNIST classification task used as an example to assess the performance of this encoding method on cognitive tasks. Test results show an improvement in accuracy of approximately 4.5 percentage points. Additionally, activeness coding exhibits potential advantages in terms of resource conservation. Overall, activeness offers a promising approach to spiking neural network encoding, with implications for various applications in the field of neural computation.

1. Introduction

In the field of neuromorphic computing, spiking neural networks (SNNs) have garnered significant attention as a biologically inspired computational model and are regarded as the third generation of neural networks [1]. This model is based on the spiking behavior of biomimetic neurons, allowing the processing of spatiotemporal information while efficiently conserving energy. The representation of information by spiking neurons, known as neural coding [2], is closely tied to the input encoding, information transmission, learning algorithms, and output readout of spiking neural networks.
Currently, one of the predominant approaches for encoding the activity of spiking neurons is rate coding [3]. This method is simple and robust because it relies on the average firing rate of neurons. However, rate coding suffers from slow information transmission and lower processing efficiency, since it requires statistical calculations over long time windows. Moreover, it disregards the crucial impact of precise spike timing on network activity, which contradicts relevant physiological findings [4].
Another widely adopted approach is temporal coding, which uses precise spike timing to represent information; examples include time-to-first-spike (TTFS) coding [5] and relative spike latency (RST) coding [6]. These methods speed up information transmission and are energy efficient, since a single spike per neuron can suffice. However, they are less robust to interference, and their sparse firing makes it difficult to extend such networks directly to multiple layers. Other encoding methods, such as phase coding [7] and burst coding [8], have also been proposed in the search for simple, efficient, and robust encoding strategies for spiking neurons.
This paper proposes a novel spike-based encoding method called activeness, which integrates the historical activity information of spiking neurons. Activeness quantifies the discharge of spiking neurons using a single scalar value, effectively integrating the historical firing rate and precise spike timing information of the neurons while maintaining a limited resource cost. This encoding method can be directly applied to input encoding, learning algorithms, and output decoding in spiking neural networks. It enables resource savings, complexity reduction, and improved learning efficiency in network operations.
In the following sections, we first present and analyze the definition and computation of activeness. We then experimentally examine its representational capability and robustness, as well as its performance on a classical classification task. Finally, we summarize and discuss potential applications of activeness.

2. Proposed Methods

In the biological brain, neurons process and transmit information through seemingly instantaneous, stochastic, and disordered firing activities. Understanding the mechanisms of neuronal firing in the biological brain is of great significance for cognitive neuroscience and deciphering brain function. In spiking neural networks inspired by the brain, one of the primary tasks is to represent information at the level of neurons, which is known as neural coding.
A good neural coding scheme should accurately represent the input information of neurons (accuracy), yield consistent coding outcomes for different signal frequencies or intensities (robustness), maximize the inclusion of information to enhance coding efficiency (efficiency), and provide an explanation for the encoding mechanisms that align with the knowledge of neuroscience (interpretability).
Neuroscientific research has revealed that neuronal firing is a complex series of electrochemical reactions triggered by excitation. Sustained neuronal firing leads to the accumulation of calcium ions within the cell [9,10], and calcium ions participate in various processes of signal transmission and activity modulation [11]. Neuronal firing activity also continuously regulates the synthesis and degradation of proteins within the cell [12], and these proteins are involved in nearly all aspects of neuronal activity [13]. The changes in calcium ions and proteins within the neuron have a significant impact on synaptic plasticity and neuronal function. However, these changes are not instantaneous: they occur gradually through accumulation or degradation, as the result of long-term neuronal activity. This is different from the rapid processes occurring at synapses, such as the release of neurotransmitters controlled by the activity states of pre- and postsynaptic neurons, and the switching of ion channels; these rapid processes are the transient activities of neurons. Nevertheless, the two processes are closely related. Transient activities cause changes in regulatory substances (for example, continuous stimulation leads to the synthesis of transcription factors and new proteins, which may result in the formation of new synapses [14]), and these changes in substances in turn affect the intensity of each transient activity (for example, the concentration of calcium ions affects the movement of synaptic vesicles and the release of neurotransmitters [15], which directly influences the action of postsynaptic neurons).
The previous encoding schemes, including rate coding, temporal coding, phase coding, and sequence coding, mainly reflect the transient activity of neurons and lack emphasis on the historical context. To achieve an encoding scheme that captures both transient changes and historical activity, this paper proposes a novel neural encoding scheme called activeness coding. The specific definition is as follows.
Activeness is a metric that quantifies the activity level of a neuron. When an action potential occurs, the activeness, denoted as A, increases by R and subsequently decays toward zero with a time constant τ_A. The step size R is determined by the time interval since the last discharge: when the neuron fires, R is reset to 1 and then decays toward zero with a time constant τ_R. The computation is given by
$$\frac{\mathrm{d}A(t)}{\mathrm{d}t} = -\frac{A(t)}{\tau_A}, \qquad \frac{\mathrm{d}R(t)}{\mathrm{d}t} = -\frac{R(t)}{\tau_R}, \tag{1}$$
$$A(t) = A(t) + R(t-\epsilon), \qquad R(t) = 1, \qquad t = t^{f}, \tag{2}$$
where t^f represents the moment of neuronal firing. The small positive constant ϵ in Equation (2) ensures that the activeness A of the neuron is updated before R is reset to 1. This guarantees that the increment of A is correlated with the interval between neuronal firings: as the interval between two firings becomes shorter, R approaches 1, whereas for longer intervals, R approaches 0. The relationships among the membrane potential V, the step size R, and the activeness A of the neuron are illustrated in Figure 1.
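To make the definition concrete, the following minimal Python sketch (our illustration only; the function and parameter names are ours, not the authors') computes a discretized A(t) from a list of spike times according to Equations (1) and (2):

```python
import numpy as np

def activeness_trace(spike_times, t_end, dt=1.0, tau_a=100.0, tau_r=20.0):
    """Discretized activeness A(t) for a spike train (times in ms).

    Between spikes, A and R decay exponentially (Equation (1)).
    At each spike, A is incremented by the current value of R *before*
    R is reset to 1 (Equation (2), the role of epsilon), so the
    increment shrinks as the interval since the previous spike grows.
    """
    n_steps = int(t_end / dt)
    spike_steps = {int(round(t / dt)) for t in spike_times}
    a, r = 0.0, 0.0
    trace = np.empty(n_steps)
    for i in range(n_steps):
        a *= np.exp(-dt / tau_a)   # decay of A
        r *= np.exp(-dt / tau_r)   # decay of R
        if i in spike_steps:
            a += r                 # update A first
            r = 1.0                # then reset R
        trace[i] = a
    return trace
```

For a regular spike train at the peak rate used below (about one spike every 16 ms), the resulting trace rises over roughly the first 300 ms and then fluctuates around a steady level, matching the behavior reported in Section 3.1.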
Activeness is a scalar value that is dimensionless. It is not only influenced by the most recent discharge of the neuron but also takes into account the impact of all previous discharges. The step size R reflects the precise timing of the most recent spike. When the input remains constant, the cumulative sum of step sizes R can reflect the average number of spike occurrences within a certain time period, which is consistent with the spike rate. Therefore, we believe that activeness encompasses both spike rate information and precise spike timing information. Detailed experimental validation and analysis are presented in Section 3.
In the activeness formulation (Equation (1)), τ_R determines the influence of the most recent spike on the activeness, while τ_A determines the influence of the historical spike activity. Typically, τ_R is not greater than τ_A; otherwise, the activeness encoding would exhibit significant fluctuations.
It is worth noting that the activeness calculation model proposed in this paper exhibits similarities with the output voltage waveform of an LIF (leaky integrate-and-fire) neuron’s RC integrator in some intervals. However, their fundamental natures are entirely different and they are not interchangeable. First and foremost, they address different problems. LIF neurons (with synaptic models included) deal with the dynamic relationship between input spikes and the membrane potential of neurons. In contrast, activeness coding quantifies the degree of neural activity. Furthermore, they correspond to different neurophysiological processes. LIF neurons primarily simulate changes in the neuronal membrane potential, whereas, as mentioned earlier, activeness coding abstracts the processes related to internal substances like calcium ions and proteins within neurons. Moreover, they operate on different time scales. The membrane time constant of LIF neurons typically falls in the range of tens of milliseconds, while activeness coding integrates information over longer time scales, typically in the hundreds of milliseconds, to more accurately represent the activity characteristics of neurons. Finally, there are differences in computational details. Although both LIF neurons and activeness involve nonlinear integrators, unlike the fixed time constant of LIF’s RC circuit, the integration time constant in activeness is not constant but varies depending on the input spike pattern. Activeness coding also does not require a comparator similar to the membrane potential threshold used in LIF neurons.

3. Experiments and Evaluation

3.1. Basic Characteristics

As described in the previous section, activeness coding possesses a certain degree of biological interpretability. The calculation method of activeness allows it to naturally incorporate both the temporal information of the nearest spike and the activity information from all previous spikes, thus achieving high encoding efficiency. However, the accuracy and robustness of its encoding need to be examined through experiments.
To visualize the basic performance of activeness coding more directly, we conducted tests using the encoding of input neuron activity as an example. The data processing procedure for the testing system is illustrated in Figure 2.
In Figure 2, the input to the neuron, denoted as D_in, is a real number, such as the pixel value of a gray-scale image in an image classification task. First, based on the specified peak firing rate f_P, we calculate the firing rate of the neuron for the current input, which determines whether the membrane potential V at each moment is in the resting state or the action-potential state; this converts the real-valued input into spiking output. We then calculate the activeness A using Equations (1) and (2). Finally, we visualize the recorded membrane potential V and activeness A over the testing period. The test results are shown in Figure 3 and Figure 4.
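The following Python sketch (ours, for illustration; names and defaults are assumptions) reproduces this conversion step, generating either an ideal constant-rate train or a Poisson train from a pixel value; the two modes correspond to the accuracy and robustness tests below:

```python
import numpy as np

def encode_input(d_in, f_peak=63.75, t_end=1000.0, dt=1.0,
                 poisson=False, rng=None):
    """Convert a pixel value d_in in [0, 255] into a boolean spike train.

    The firing rate scales linearly with the input: d_in = 255 maps to
    f_peak = 63.75 Hz, i.e. the rate in Hz equals d_in / 4.
    """
    rate = d_in / 255.0 * f_peak              # firing rate in Hz
    n_steps = int(t_end / dt)
    if poisson:
        rng = rng or np.random.default_rng()
        # Bernoulli approximation of a Poisson process per time step
        return rng.random(n_steps) < rate * dt / 1000.0
    spikes = np.zeros(n_steps, dtype=bool)
    if rate > 0:
        period = 1000.0 / rate                # inter-spike interval in ms
        spikes[(np.arange(n_steps) * dt) % period < dt] = True
    return spikes
```

Spike indices can be converted to times with np.flatnonzero(spikes) * dt and fed to the activeness_trace sketch above to reproduce the qualitative behavior of Figures 3 and 4.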
Accuracy. When the input real numbers D_in are set to 63, 127, 191, and 255, with f_P = 63.75 Hz and τ_R = τ_A = 100 ms, the neuron is configured to discharge at an ideal constant rate. The membrane potential variations of the input neuron within 1000 ms are shown in the four waveform segments in Figure 3a. Meanwhile, the activeness gradually increases nonlinearly from an initial value of 0 and stabilizes after approximately 300 ms. The growth rate of the activeness is positively correlated with the firing rate of the neuron, i.e., with the input real values, as depicted by the four curves in Figure 3b, which from bottom to top represent the four increasing input values. On a microscopic level, influenced by the stimulating pulses, the activeness dynamically fluctuates between step increments and decays, with the amplitude of the fluctuations related to the time constants τ_R and τ_A. However, the overall trend of the activeness corresponds to the input values; in other words, the activeness accurately encodes the input information. Even during the rising period of the activeness (0–300 ms), the relative magnitudes of the activeness reflect the relative sizes of the inputs. Although activeness is not as precise as temporal coding, it overcomes the dependency of rate coding on large time windows, thereby improving the information transmission rate.
Robustness. The actual firing patterns of biological neurons differ from the ideal case above and are characterized by inherent randomness. In neural computational models, the firing process is often treated as a series of input-dependent stochastic events, commonly assumed to follow a Poisson distribution. Therefore, in this experiment, Poisson random numbers were used to generate the firing pulses. When the inputs D_in are set to 63, 127, 191, and 255, with f_P = 63.75 Hz and τ_R = τ_A = 100 ms, the membrane potential variations of the neuron after Poisson coding are shown in Figure 4a. The generated discrete pulses have varying time intervals between them, so the activeness also exhibits noticeable fluctuations. Averaging the results of 10 experiments and computing the standard deviation yields the activeness curves in Figure 4b: the solid lines represent the mean activeness for different inputs, while the translucent areas of the same color indicate the range of data fluctuations. Although the activeness exhibits substantial random fluctuation, its mean values remain strongly correlated with the inputs, and the four curves are clearly distinct, without overlap or intertwining, indicating the robustness of activeness coding.

3.2. Classification Performance

To evaluate the performance of spiking neural networks utilizing activeness coding in cognitive tasks, we used handwritten digit recognition as an example: we introduced activeness coding into the three-layer spiking neural network established by Diehl and Cook [16] and investigated the changes in network performance. The architecture of [16] comprises an input layer, where input pixel values are encoded as Poisson-distributed spike trains; an excitatory neuron layer, fully connected to the input layer, using leaky integrate-and-fire (LIF) neurons with conductance-based synaptic models; and an inhibitory neuron layer implementing lateral inhibition to establish competition among the excitatory neurons. The synaptic weights between the input layer and the excitatory layer are adjusted using unsupervised exponential-weight-dependence spike-timing-dependent plasticity (STDP), coupled with an adaptive membrane-potential-threshold mechanism to maintain network homeostasis. Diehl and Cook implemented this network in the Python-based spiking neural network simulator Brian [17] and trained it on the Modified National Institute of Standards and Technology (MNIST) handwritten digit dataset [18], achieving the best unsupervised SNN test accuracy on MNIST at that time.
Based on the network architecture proposed by Diehl and Cook, the structure of the network developed for this test is illustrated in Figure 5.
The MNIST dataset, used as the input to the network, consists of 60,000 training images of handwritten digits with a resolution of 28 × 28, as well as 10,000 test images at the same resolution; pixel values range from 0 to 255. Each input neuron receives one pixel and generates output spikes using Poisson random numbers, which are then encoded with the activeness-based scheme; there are therefore 784 input neurons, one per pixel. The input layer neurons are excitatory and fully connected to 400 excitatory neurons. Each excitatory neuron drives one inhibitory neuron, which in turn is recurrently connected to the other 399 excitatory neurons, implementing a competitive mechanism. The entire network was built and run using the upgraded neural simulator Brian2 [19].
The recurrent connection weights between the excitatory and inhibitory neuron layers are fixed, while the excitatory synaptic connections between the input layer and the excitatory neuron layer are plastic. In contrast to the approach in reference [16], this paper improves on triplet-STDP [20] by proposing an activeness-based triplet-STDP rule for synaptic weight modification: the presynaptic trace in the triplet-STDP formula is replaced with the activeness of the presynaptic neuron, and the postsynaptic trace with the activeness of the postsynaptic neuron. Consequently, when a presynaptic neuron fires at time t = t_pre^f, the weight modification is Δω = −γ_pre A_post(t_pre^f); similarly, when a postsynaptic neuron fires at time t = t_post^f, the weight update is Δω = γ_post A_pre(t_post^f) A_post(t_post^f). Integrating these gives the synaptic weight update formulas shown in Equations (3) and (4).
$$\omega(t) = \omega(t-1) + \Delta\omega(t), \tag{3}$$
$$\Delta\omega(t) = \begin{cases} -\gamma_{\mathrm{pre}}\, A_{\mathrm{post}}(t), & t = t_{\mathrm{pre}}^{f}, \\ \gamma_{\mathrm{post}}\, A_{\mathrm{pre}}(t)\, A_{\mathrm{post}}(t), & t = t_{\mathrm{post}}^{f}, \\ 0, & \text{otherwise}, \end{cases} \tag{4}$$
where γ_pre represents the learning rate of the presynaptic neuron and γ_post the learning rate of the postsynaptic neuron; A_pre(t) and A_post(t) denote the activeness of the presynaptic and postsynaptic neurons, respectively, at time t.
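As a minimal sketch of this rule in isolation (our illustration; the names are ours), the event-driven update of a single weight per Equations (3) and (4), with weights bounded to [0, 1] as in the trained network, can be written as:

```python
def update_weight(w, a_pre, a_post, event,
                  gamma_pre=1e-4, gamma_post=1e-2, w_min=0.0, w_max=1.0):
    """One activeness-based triplet-STDP weight update (Equations (3)-(4)).

    a_pre and a_post are the current activeness values of the pre- and
    postsynaptic neurons; event is 'pre' or 'post' depending on which
    neuron has just fired. Weights are kept in [w_min, w_max].
    """
    if event == 'pre':        # presynaptic spike: depression
        dw = -gamma_pre * a_post
    elif event == 'post':     # postsynaptic spike: potentiation
        dw = gamma_post * a_pre * a_post
    else:
        dw = 0.0
    return min(max(w + dw, w_min), w_max)
```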
Based on the above approach, the synaptic dynamics formula is defined in Brian2 as shown in Figure 6.
The code executed when the presynaptic neuron fires is shown in Figure 7.
The code executed when the postsynaptic neuron fires is shown in Figure 8.
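Since Figures 6–8 are images, the following Brian2 sketch shows approximately what such definitions look like; it is our reconstruction from the equations above, not the authors' exact code, and the neuron parameters and variable names are assumptions:

```python
from brian2 import *

tau_A = 100*ms; tau_R = 20*ms          # activeness time constants (Table 1)
gamma_pre = 0.0001; gamma_post = 0.01  # learning rates (Table 1)

# Activeness lives on the neuron: each neuron carries its own A and R.
activeness = '''
dA/dt = -A / tau_A : 1
dR/dt = -R / tau_R : 1
'''
on_spike = '''
A += R
R = 1
'''

# Input layer: Poisson-like firing at a per-neuron rate, plus A and R.
inp = NeuronGroup(784, 'rate : Hz' + activeness,
                  threshold='rand() < rate*dt', reset=on_spike,
                  method='euler')
inp.rate = 30*Hz          # placeholder; in the task, rate = pixel / 4 Hz

# Excitatory layer: LIF dynamics plus the same activeness variables
# (a current-based simplification of the conductance synapses in [16]).
exc = NeuronGroup(400, 'dv/dt = -v / (100*ms) : volt (unless refractory)'
                  + activeness,
                  threshold='v > 20*mV', reset='v = 0*mV' + on_spike,
                  refractory=5*ms, method='euler')

# Plastic synapses implementing Equation (4), weights clipped to [0, 1].
syn = Synapses(inp, exc, model='w : 1',
               on_pre='''v_post += w*mV
                         w = clip(w - gamma_pre*A_post, 0, 1)''',
               on_post='w = clip(w + gamma_post*A_pre*A_post, 0, 1)')
syn.connect()
syn.w = 'rand() * 0.3'    # random initial weights

run(100*ms)
```

Keeping A and R on the neurons rather than on the synapses is what yields the per-neuron (instead of per-synapse) storage cost discussed in Section 4.1.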
The label assignment method during the training phase and the determination of classification results during the testing and inference phase are consistent with the method described in [16] and are therefore not described in detail here; a compact sketch of this readout is given below.
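As a rough illustration of that procedure (our own paraphrase with our own function names, not code from [16] or this paper): each excitatory neuron is assigned the digit class for which it fired most strongly during training, and a test image is classified by the label whose assigned neurons respond most on average.

```python
import numpy as np

def assign_labels(train_counts, train_labels, n_classes=10):
    """Assign each excitatory neuron the class with its highest mean response.

    train_counts: (n_samples, n_neurons) spike counts per training image.
    train_labels: (n_samples,) integer digit labels.
    """
    mean_per_class = np.stack([train_counts[train_labels == c].mean(axis=0)
                               for c in range(n_classes)])
    return mean_per_class.argmax(axis=0)        # shape: (n_neurons,)

def predict(test_counts, neuron_labels, n_classes=10):
    """Classify each test image by the label whose neurons fire most on
    average (assumes every class has at least one assigned neuron)."""
    votes = np.stack([test_counts[:, neuron_labels == c].mean(axis=1)
                      for c in range(n_classes)])
    return votes.argmax(axis=0)                 # shape: (n_test_samples,)
```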
In this test, f_P was again set to 63.75 Hz. The remaining parameters were determined through repeated experiments, searching for a combination that achieved relatively good classification performance; the values obtained are shown in Table 1. Note that this set of values does not necessarily represent the optimal parameter combination.
After three rounds of training, the synaptic weight matrix of size 784 × 400 connecting the input layer to the excitatory neuron layer is reshaped into a matrix of size (28 × 20) × (28 × 20). The distribution of synaptic weights obtained from this rearrangement is shown in Figure 9.
At this stage, a testing accuracy of 91.50% was achieved on the test dataset. The comparison between these results and those of [16] is shown in Table 2. The experimental findings demonstrate that solely replacing the synaptic traces in STDP with activeness, while keeping all other conditions identical, improves MNIST classification accuracy by 4.5 percentage points (from 87% to 91.5%). This improvement validates the advantage of activeness coding on the MNIST task.
When the number of training samples varies, the curve depicting the change in test accuracy is shown in Figure 10.
As shown in Figure 10, the test accuracy steadily increases with the number of training iterations. It is observed that even after a single training epoch, the test accuracy achieved is comparable to that of the reference experiment conducted for three epochs. This finding further demonstrates the efficiency of activeness coding.

4. Discussion

4.1. Comparison

The activeness coding method is inspired by synaptic traces. Synaptic traces are a classic mathematical model used in spiking neural networks to characterize the impact of synaptic history on synaptic plasticity [21]. However, the two are fundamentally different, primarily in the following aspects:
Structural differences. Synaptic traces record the impact of neural activity on synapses and are typically defined as variables within the synapse. They include both pre-synaptic and post-synaptic traces. In contrast, activeness coding mainly considers the potential influence of a neuron’s own historical activity on itself and is only relevant to the neuron (soma). It is simpler and more straightforward.
Computational differences. Synaptic traces can be either all-to-all or only consider the nearest spiking event. Typically, constant values are accumulated in their calculations. On the other hand, the dynamic changes in the accumulation of activeness amplify the impact of precise discharge timing, thereby improving the efficiency of network operation.
Resource requirements. Learning based on synaptic traces requires maintaining two to three trace variables per synapse, whereas activeness coding requires only two variables per neuron. Taking the classification performance test network in Section 3 as an example, triplet STDP would require 784 × 400 × 3 = 940,800 trace variables in total, while activeness coding requires only (784 + 400) × 2 = 2368 variables, approximately 1/400th as many. As the network size increases, the resource savings from this improvement become even more significant.

4.2. Potential Future Research

As a concise, macroscopic, efficient, and robust encoding method, activeness coding holds promise beyond input encoding and can be applied to the learning algorithm and network output inference.
Taking learning algorithms as an example, the macroscopic and robust expressiveness of activeness coding could be used to improve weight-update strategies based on the precise discharge timing of pre- and postsynaptic neurons in unsupervised, supervised, and reward-based reinforcement learning algorithms, for instance by adopting rules based on activeness differences, leading to more stable network activity and faster learning convergence.
In terms of network output inference, many networks require additional classifiers or statistical computations to obtain human-understandable results. Activeness coding, by contrast, provides a single quantitative value that directly reflects the activity of output-layer neurons, making the network's results readily interpretable.
In addition to using spiking neural networks with activeness coding for image classification tasks, we are also exploring its potential in cross-modal information perception and integration. In the future, the proposed method could be further investigated to tackle various cognitive tasks.

5. Conclusions

This paper introduces a novel encoding method for spiking neurons, termed activeness coding. The method is inspired by the cumulative changes in calcium ions and proteins within neurons that influence neural activity. Activeness combines the precise timing of the most recent spike with the entire history of spike events, and it demonstrates good encoding accuracy and robustness in basic characteristic tests. The results of the image classification task show that activeness-based learning improves testing accuracy on the MNIST dataset by approximately 4.5 percentage points compared to trace-based learning. Furthermore, activeness exhibits potential advantages in terms of computation and storage requirements.

Author Contributions

Conceptualization, Z.W. and N.Y.; data curation, Z.W.; formal analysis, Z.W. and N.Y.; funding acquisition, Z.W., N.Y. and Y.L.; investigation, Z.W. and Y.L.; methodology, Z.W.; project administration, N.Y.; software, Z.W.; validation, Z.W., N.Y. and Y.L.; writing—original draft, Z.W.; writing—review and editing, N.Y. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant no. 62076014) and the Beijing Municipal Natural Science Foundation (grant no. 4162012).

Data Availability Statement

The data and models that support the results of this study are included in the article. The code generated or used during the study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
STDP     spike-timing-dependent plasticity
MNIST    Modified National Institute of Standards and Technology
TTFS     time-to-first-spike
RST      relative spike latency

References

1. Maass, W. Networks of spiking neurons: The third generation of neural network models. Neural Netw. 1997, 10, 1659–1671.
2. Guo, W.; Fouda, M.E.; Eltawil, A.; Salama, K. Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems. Front. Neurosci. 2021, 15, 638474.
3. Adrian, E.; Zotterman, Y. The impulses produced by sensory nerve-endings: Part II. The response of a single end-organ. J. Physiol. 1926, 61, 151–171.
4. Gerstner, W.; Kreiter, A.; Markram, H.; Herz, A. Neural codes: Firing rates and beyond. Proc. Natl. Acad. Sci. USA 1997, 94, 12740–12741.
5. Johansson, R.S.; Birznieks, I. First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nat. Neurosci. 2004, 7, 170.
6. Gollisch, T.; Meister, M. Rapid Neural Coding in the Retina with Relative Spike Latencies. Science 2008, 319, 1108–1111.
7. O'Keefe, J.; Recce, M. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 1993, 3, 317–330.
8. Zeldenrust, F.; Wadman, W.; Englitz, B. Neural Coding With Bursts—Current State and Future Perspectives. Front. Comput. Neurosci. 2018, 12, 48.
9. Isomura, Y.; Kato, N. Action Potential–Induced Dendritic Calcium Dynamics Correlated With Synaptic Plasticity in Developing Hippocampal Pyramidal Cells. J. Neurophysiol. 1999, 82, 1993–1999.
10. Cepeda-Prado, E.; Khodaie, B.; Quiceno, G.; Beythien, S.; Lessmann, V.; Edelmann, E. Calcium-permeable AMPA receptors mediate timing-dependent LTP elicited by 6 coincident action potentials at Schaffer collateral–CA1 synapses. Cereb. Cortex 2021, 32, 1682–1703.
11. Inglebert, Y.; Debanne, D. Calcium and Spike Timing-Dependent Plasticity. Front. Cell. Neurosci. 2021, 15, 727336.
12. Sutton, M.A.; Schuman, E.M. Dendritic Protein Synthesis, Synaptic Plasticity, and Memory. Cell 2006, 127, 49–58.
13. Khan, R.; Kulasiri, D.; Samarasinghe, S. Functional repertoire of protein kinases and phosphatases in synaptic plasticity and associated neurological disorders. Neural Regen. Res. 2020, 16, 1150–1157.
14. Batool, S.; Raza, H.; Zaidi, J.; Riaz, S.; Hasan, S.; Syed, N.I. Synapse formation: From cellular and molecular mechanisms to neurodevelopmental and neurodegenerative disorders. J. Neurophysiol. 2019, 121, 1381–1397.
15. Wu, L.G.; Westenbroek, R.; Borst, J.; Catterall, W.; Sakmann, B. Calcium Channel Types with Distinct Presynaptic Localization Couple Differentially to Transmitter Release in Single Calyx-Type Synapses. J. Neurosci. 1999, 19, 726–736.
16. Diehl, P.; Cook, M. Unsupervised Learning of Digit Recognition Using Spike-Timing-Dependent Plasticity. Front. Comput. Neurosci. 2015, 9, 99.
17. Goodman, D.; Brette, R. Brian: A simulator for spiking neural networks in Python. Front. Neuroinf. 2008, 2, 5.
18. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
19. Stimberg, M.; Brette, R.; Goodman, D.F.M. Brian 2: An Intuitive and Efficient Neural Simulator. eLife 2019, 8, e47314.
20. Pfister, J.P.; Gerstner, W. Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity. J. Neurosci. 2006, 26, 9673–9682.
21. Gerstner, W.; Kistler, W.M. Spiking Neuron Models: Single Neurons, Populations, Plasticity; Cambridge University Press: Cambridge, UK, 2002.
Figure 1. Illustration of the definition of activeness. The first row shows the waveform of neural membrane potential V over time, where each pulse represents a single discharge of the neuron. The second row displays the variation of the step variable R, which is set to 1 upon neuron discharge and decays with a time constant τ R . The third row exhibits the waveform of activeness A, which is obtained by adding R upon neuron discharge and decays with a time constant τ A .
Figure 2. Data processing workflow for basic characteristics testing of activeness.
Figure 3. Encoding results of activeness for the ideal neural activity. (a) The waveforms of neural membrane potential V over time t are shown when the inputs are 63, 127, 191, and 255, and the neurons fire spikes at constant rates. (b) The corresponding activeness-based encoding results are presented.
Figure 4. Encoding results of activeness for neurons with Poisson distributed activity. (a) The waveform of neural membrane potential V over time t is shown when the inputs are 63, 127, 191, and 255, and the neurons fire spikes with Poisson distribution. (b) The corresponding activeness-based encoding results are presented, with solid lines representing the mean of 10 experimental trials, and the semi-transparent regions of the same color indicating the range of data fluctuations.
Figure 5. The network architecture for testing classification performance. It consists of three layers. The first layer is the input layer, containing 784 neurons responsible for encoding the pixel values of input images on a one-to-one basis. The second layer is the excitatory neuron layer, comprising 400 excitatory neurons, fully connected to the input layer, and with synaptic weights updated using the activeness-based STDP rule. The third layer is the inhibitory neuron layer, comprising 400 inhibitory neurons, each receiving excitation from one neuron in the excitatory neuron layer and reciprocally connected to the other 399 excitatory neurons.
Figure 6. Synaptic dynamics simulation code.
Figure 7. The simulation code executed when a presynaptic neuron fires.
Figure 8. The simulation code executed when a postsynaptic neuron fires.
Figure 9. Synaptic weight matrix after training completion. The excitatory neuron layer comprises 400 neurons arranged in a 20 × 20 grid. Each neuron is fully connected to 784 input layer neurons through synaptic connections. The synaptic weights, totaling 784, are rearranged into a 28 × 28 grid, resulting in the weight distribution. The white color represents the minimum weight value of 0, while the black color represents the maximum weight value of 1.
Figure 10. The test accuracy results for MNIST.
Table 1. Parameters for the classification performance testing network.

Parameter    Value
τ_R          20 ms
τ_A          100 ms
f_P          63.75 Hz
γ_pre        0.0001
γ_post       0.01
Table 2. Comparison of MNIST classification performance test results.

Work                       Learning Rule                      Accuracy
Diehl et al., 2015 [16]    Spike-based triplet STDP           87%
This work                  Activeness-based triplet STDP      91.5%