Real-time implementation of ReSuMe learning in Spiking Neural Network

Neuromorphic systems are designed by mimicking, or being inspired by, the nervous system, which realizes robust, autonomous, and power-efficient information processing through a highly parallel architecture. Supervised learning was proposed as a successful concept of information processing in neural networks. Recently, there has been an increasing body of evidence that instruction-based learning is also exploited by the brain. ReSuMe, an algorithm proposed by Ponulak and Kasinski in 2010, is a supervised learning method for biologically plausible neurons that reproduces template signals (instructions), i.e. patterns encoded in precisely timed sequences of spikes. Here, we present a real-time implementation of ReSuMe learning on an FPGA using a Leaky Integrate-and-Fire (LIF) Spiking Neural Network (SNN). The FPGA allows real-time operation and embedded deployment. We show that this implementation successfully learns a specific target spike pattern.


Introduction
Neuromorphic systems are designed by mimicking, or being inspired by, the nervous system, which realizes robust, autonomous, and power-efficient information processing through a highly parallel architecture. There are three common ways to realize neuromorphic circuits: software 1,2,3, analog hardware 4,5,6,7 and digital hardware 8,9,10,11,12. Software can implement simple neuron models, but a large-scale neural network with complex neuron models cannot be simulated in real time, and the power consumption is considerable (kW for a supercomputer).
Among hardware implementations, digital circuits consume more power than analog ones, but they are easier to modify, more portable, and cheaper to implement on FPGA devices. Supervised learning was proposed as a successful concept of information processing in neural networks 13. Recently, there has been an increasing body of evidence that instruction-based learning is also exploited by the brain. The Remote Supervised Method (ReSuMe) is a supervised learning method for Spiking Neural Networks. The original motivation for ReSuMe was the need for an effective learning method to control movement for people with physical disabilities. However, an in-depth analysis of the ReSuMe method shows that it is suitable not only for motion-control tasks but also for other practical applications, including the modeling, identification and control of various non-stationary and non-linear objects 14,15. In this paper, we present a real-time implementation of ReSuMe learning on an FPGA using a Leaky Integrate-and-Fire (LIF) Spiking Neural Network (SNN). The FPGA allows real-time operation and embedded deployment 16. We show that this implementation successfully learns a specific target spike pattern.

Method
This section presents the three methods applied in the ReSuMe learning implementation on FPGA: the LIF neuron model, the postsynaptic potential (PSP) and Spike Response Model (SRM), and the ReSuMe algorithm.

LIF neuron model
The LIF neuron is one of the simplest spiking neuron models. Because it is easy to analyze, simulate and, especially, implement in digital silicon neural networks, the LIF neuron is very popular 17. A neuron is modeled as a "leaky integrator" of its input I(t):

τ dv(t)/dt = -v(t) + R I(t)    (1)

where v(t) is the membrane potential at time t, τ is the membrane time constant and R is the membrane resistance. This equation describes a simple resistor-capacitor (RC) circuit, where the leakage term is due to the resistor and the integration of I(t) is due to the capacitor in parallel with the resistor. Spiking events are not explicitly modeled in the LIF model. Instead, when the membrane potential v(t) reaches a threshold v_th (spiking threshold), it is instantaneously reset to a lower value v_reset (reset potential) and the leaky integration described by Eq. (1) starts anew with the initial value v_reset. Consider the case of a constant input I(t) = I and assume v_reset = 0. The solution of Eq. (1) is then

v(t) = R I (1 - exp(-t/τ))    (2)

in which the exponential term decays over time. In a discrete digital sequential circuit, a linear-decay method is usually used to optimize the computation and save hardware resources:

dv = (-v + R I) dt/τ    (3)

Eq. (3) gives the membrane-potential increment at each time step, and the potential is then updated as v = v + dv.
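The discrete update above can be sketched in software as follows. This is a minimal simulation model of the stated update rule, not the VHDL implementation itself; all parameter values (dt, τ, R, thresholds) are assumptions for illustration.

```python
def simulate_lif(current, dt=1e-4, tau=0.02, R=1.0, v_th=1.0, v_reset=0.0):
    """Return (membrane trace, spike times) for an input current sequence."""
    v = v_reset
    trace, spikes = [], []
    for step, I in enumerate(current):
        dv = (-v + R * I) * dt / tau      # linear-decay update, Eq. (3)
        v += dv
        if v >= v_th:                     # threshold crossing
            spikes.append(step * dt)
            v = v_reset                   # instantaneous reset
        trace.append(v)
    return trace, spikes

# A constant supra-threshold input (R*I > v_th) produces regular spiking:
trace, spikes = simulate_lif([2.0] * 2000)
```

With R*I = 2.0 above threshold, the neuron charges toward R*I, fires, resets, and repeats, which is the qualitative behaviour the hardware neuron reproduces.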

Postsynaptic Potential and Spike Response Model
Consider a single postsynaptic neuron i with a membrane potential u_i(t) at time t; a simplified SRM is defined as 17

u_i(t) = Σ_j w_ij ε(t - t_j)    (4)

where t_j denotes the firing time of presynaptic neuron j.
This SRM expresses the dependence of the neuron's membrane potential on the presynaptic input pattern delivered through its synapses. An output spike occurs at the time at which u_i(t) crosses the spiking threshold from below.
The sum in Eq. (4) is a weighted summation of the presynaptic input: w_ij is the synaptic weight from presynaptic neuron j, and the kernel ε(s) describes the shape of an evoked PSP. The PSP kernel evolves according to

ε(s) = (1/(1 - τ_s/τ_m)) (exp(-s/τ_m) - exp(-s/τ_s)) Θ(s)    (5)

where Θ(s) is the Heaviside step function, defined such that Θ(s) = 1 for s ≥ 0 and Θ(s) = 0 for s < 0. Here we approximate the postsynaptic current's time course by an exponential decay 18.
To further simplify the computation in a digital circuit, the PSP kernel ε(s) is approximated by a simple exponential decay exp(-s/τ_s) Θ(s).
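The simplified PSP/SRM evaluation can be sketched as below: each presynaptic spike restarts an exponentially decaying trace, and the SRM potential is the weighted sum of all traces, as in Eq. (4). Parameter values are illustrative assumptions, not the hardware constants.

```python
import math

def psp_trace(spike_steps, n_steps, dt=1e-4, tau_s=0.005):
    """Exponential-decay approximation of the PSP kernel for one synapse."""
    eps = 0.0
    decay = math.exp(-dt / tau_s)
    out = []
    for step in range(n_steps):
        if step in spike_steps:
            eps = 1.0                 # a presynaptic spike restarts the decay
        out.append(eps)
        eps *= decay
    return out

def srm_potential(weights, traces):
    """u_i(t) = sum_j w_ij * eps_j(t), evaluated at every time step."""
    return [sum(w * tr[t] for w, tr in zip(weights, traces))
            for t in range(len(traces[0]))]

# Two synapses spiking at steps 10 and 30:
traces = [psp_trace({10}, 100), psp_trace({30}, 100)]
u = srm_potential([0.5, 0.8], traces)
```

Each spike contributes a jump of height w_ij to u_i(t), followed by an exponential relaxation, which is the behaviour the digital PSP module reproduces.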

ReSuMe architecture and algorithm
An implementation of ReSuMe in the Liquid State Machine (LSM) architecture has been proposed as an example 19. The Liquid State Machine consists of a large, fixed "reservoir" network, the neural microcircuit (NMC), from which the desired output is obtained by training suitable output connection weights.
In the implementation of the ReSuMe method, the original LSM approach has been modified. The modified architecture consists of a set of input neurons, the NMC structure, a set of k learning neurons and a corresponding set of teacher neurons (see Fig. 1). The NMC receives the signal from the input neurons and transforms it into a vector of signals that is presented to the learning neurons. The teacher neurons are not directly connected to any other structure.
Since this paper focuses on the ReSuMe learning implementation itself, we generate the presynaptic input to the learning neuron directly, in place of the NMC output. The weight-modification algorithm, which adjusts the weights between the presynaptic spikes and the postsynaptic neuron, is applied according to the following simplified equation:

dw(t)/dt = [S_d(t) - S_l(t)] [a + ∫ a_d(s) S_in(t - s) ds]

where S_d(t), S_l(t) and S_in(t) are the teacher, learning-neuron and presynaptic spike trains, a is a non-Hebbian constant and a_d(s) is an exponentially decaying learning window. Fig. 2 shows a specific weight-update process.
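A hedged software sketch of this update rule is shown below: on a teacher (desired) spike the weight is potentiated, on an actual output spike it is depressed, each by a constant plus an exponentially decaying function of the time since the last presynaptic spike. The constants a, A and tau_w are illustrative assumptions, not the values used in the hardware.

```python
import math

def resume_update(w, t, last_pre, teacher_spike, post_spike,
                  a=0.01, A=0.5, tau_w=0.01):
    """Return the updated weight of one synapse at time t (seconds)."""
    if last_pre is None:
        trace = 0.0                                  # no presynaptic spike yet
    else:
        trace = A * math.exp(-(t - last_pre) / tau_w)  # learning window a_d(s)
    if teacher_spike:        # desired spike: pull the output toward the teacher
        w += a + trace
    if post_spike:           # actual output spike: depress
        w -= a + trace
    return w

# Teacher spike 2 ms after the last presynaptic spike potentiates the weight:
w = resume_update(0.2, t=0.012, last_pre=0.010,
                  teacher_spike=True, post_spike=False)
```

When the output spike train matches the teacher train, potentiation and depression cancel and the weights stabilize, which is the convergence property ReSuMe relies on.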

Implementation
This section presents the implementation of the methods introduced above, with results shown as waveforms.

Implementation of LIF neuron
We implemented the LIF neuron in VHDL on an FPGA. By adjusting the step size dt to match different time constants, our LIF neuron can operate at a high update rate (10 kHz), which means its calculation accuracy is high and the real-time requirement is met.

Implementation of PSP and SRM
As introduced in Section 2.2, we use the simplified exponential decay to realize the SRM and PSP. Because each postsynaptic neuron is connected to 500 presynaptic inputs, the hardware cost would still be unacceptable if 500 exponential operations (even with linear decay) were performed in the same clock cycle. We therefore adopt time-division multiplexing and use a two-stage pipeline to complete the 500 PSP operations within 1000 clock cycles (actually 501 cycles, with the remainder idle, theoretically supporting up to 999 inputs), using only one multiplier and one adder.

Implementation of ReSuMe learning
Our ReSuMe implementation includes an exponential attenuation (approximated by a linear attenuation) that governs the change of the parameter k. Each time a teacher input spike or a postsynaptic neuron spike arrives, the weights are updated: the 500 exponential operations are performed over 500 clock cycles using time-division multiplexing. The weights are updated in real time, so that the PSP and SRM modules immediately compute with the correct weights.

Architecture of ReSuMe learning
The overall hardware architecture of ReSuMe learning is shown in Fig. 6. We use an LIF neuron as the postsynaptic neuron, equipped with the ReSuMe learning module, and 500 presynaptic inputs are connected to it. Each connection is processed by the PSP module and summed by the SRM module.

Results and discussion
First, we simulated the actual learning waveform of ReSuMe, as shown in Fig. 7.

Conclusion
This paper introduced the advantages of using devices such as FPGAs to realize digital neural networks. The LIF neuron, the PSP and SRM modules, and ReSuMe learning were described and their hardware implementations illustrated. The overall framework of ReSuMe learning was then elaborated, and the output obtained for different inputs was analyzed.

Fig. 4. Waveform of the PSP process: once a presynaptic input spikes, the PSP starts an exponential decay.