Computation and implementation of a large-scale Spiking Neural Network

Photonic neural networks are a highly sought-after area of research due to their potential for high-performance complex computing. Unlike artificial neural networks, which use simple nonlinear maps, biological neurons transmit information and perform computations through spikes that depend on spike timing and/or rate. Through comprehensive studies and experiments, a strong foundation has been laid for the development of photonic neural networks. We have recently developed a large-scale spiking neural network, which serves as a proof-of-concept experiment for novel bio-inspired learning concepts. This result is significant because it demonstrates the potential of photonic neural networks for advanced computing and highlights the importance of incorporating biological principles into artificial intelligence research.


Introduction
The progress in deep neural networks and artificial intelligence has had a profound impact on modern information processing. Spiking neural networks (SNNs) offer a promising prospect of improving the energy efficiency and latency of deep neural networks (DNNs) by using data-driven asynchronous event processing [1,2]. SNNs are often referred to as the third generation of neural networks [3][4][5]. Research in neuroscience has revealed that the transmission of information in the brain is primarily encoded in the precise timing of spikes, in addition to the neuron firing rate [1]. In conventional artificial neural networks (ANNs), by contrast, information is encoded solely in the neuron firing rate. SNNs are inspired by biological neural networks, where information is transferred via momentary pulses called action potentials, or spikes. Unlike the activations of traditional ANN neurons, spike events are sparse and capable of encoding temporal information. Motivated by the resulting benefits, e.g. hardware friendliness and sparsity-induced energy efficiency [6], we have designed and experimentally implemented a photonic liquid state machine (LSM) [7] based on photonic recurrent SNNs [8]. Our scalable proof-of-concept experiment comprises more than 30,000 excitable neurons [9] and is the first large-scale test-bed system for next-generation bio-inspired learning concepts in photonic ANNs.

Experimental scheme
A schematic diagram of our experimental setup is shown in Fig. 1. The optical field $|E_0|^2$ is homogenized by a beam homogenizer; after transmission through a polarizing beam splitter (PBS) and a half waveplate ($\lambda/2$), a microscope objective MO$_1$ focuses it to illuminate the spatial light modulator (SLM). Our neural network's state is encoded in the pixels of the SLM, which therefore serves as the substrate of our neurons. The half waveplate positioned before MO$_1$ is oriented such that the SLM operates in amplitude modulation mode. After reflection from the SLM, the optical field passes back through the PBS, where it acquires its nonlinearity, and is modulated as

$$|E_i|^2 = |E_0|^2 \cos^2\!\left(\kappa \tilde{x}_i + \theta_0\right), \quad (1)$$

where $\tilde{x}_i$ is the gray-scale value of SLM pixel $i$, $\theta_0$ is a gray-scale offset and device constant, and $\kappa$ is the conversion factor between the polarization angle and the gray-scale values of the SLM pixels. A diffractive optical element (DOE) is introduced between the PBS and the camera; the spacing between the diffractive orders corresponds to the spacing between the SLM pixels. The beam reflected from the PBS passes through the DOE, establishing recurrent coupling between the photonic neurons. Finally, the camera, positioned at the focal plane of microscope objective MO$_2$, records the neuron responses. The signal (i.e. optical intensity) recorded by the camera can be represented as

$$I_i^{\mathrm{cam}} = \Big|\sum_j W_{ij}^{\mathrm{DOE}} E_j\Big|^2, \quad (2)$$

where $W^{\mathrm{DOE}}$ is the coupling matrix created by the DOE. The recurrent loop is therefore closed when the camera images the SLM pixels (i.e. the neurons), since the SLM is driven by the recorded images as electronic feedback. The Ikeda map defining the state of our recurrent neural network (RNN) can thus be represented as

$$x_i^{n+1} = \cos^2\!\left(\beta \tilde{x}_i^{n} + \gamma W_i^{\mathrm{inj}} u^{n+1} + \theta_0\right), \quad (3)$$

where the state $x^{n+1}$ is transferred to the SLM by the control computer via MATLAB. The parameter $\beta$ represents the feedback strength, while $\gamma$ represents the strength of the injection signal. The input information sequence $u^{n+1}$ consists of elements normalized to the range 0 to 1, and the injection matrix $W^{\mathrm{inj}}$ is comprised of randomly distributed 0s and 1s.
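The recurrent update described above can be sketched numerically. The following Python snippet is a minimal illustration, assuming a random Gaussian stand-in for the DOE coupling matrix and purely illustrative parameter values; it is not the calibrated experimental model.

```python
import numpy as np

# Minimal sketch of the Ikeda-map recurrence: W_doe stands in for the
# DOE-created coupling matrix, W_inj for the random binary injection matrix.
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

N = 100                              # number of photonic neurons (SLM pixels)
beta, gamma, theta = 0.8, 0.4, 0.2   # feedback strength, injection strength, offset

W_doe = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # recurrent coupling (DOE)
W_inj = rng.integers(0, 2, size=N).astype(float)        # random 0/1 injection

def step(x, u):
    """One iteration: x^{n+1} = cos^2(beta * (W_doe @ x^n)
    + gamma * W_inj * u^{n+1} + theta)."""
    return np.cos(beta * (W_doe @ x) + gamma * W_inj * u + theta) ** 2

x = rng.random(N)        # initial network state in [0, 1]
u_seq = rng.random(20)   # input sequence normalized to [0, 1]
for u in u_seq:
    x = step(x, u)

print(x.min() >= 0.0 and x.max() <= 1.0)  # prints True: cos^2 keeps states in [0, 1]
```

Because the nonlinearity is $\cos^2$, the network state remains confined to the unit interval regardless of the coupling, which mirrors the bounded intensity recorded by the camera.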

Results and Discussions
A neuron's all-or-nothing spiking response to input stimuli, i.e. the emission of a spike, corresponds to a dynamical process called excitability. We achieved this functionality using a numerical model expressed by the following equations:

$$x_i^{n+1} = -\alpha\, y_i^{n} + \cos^2\!\left(\beta \tilde{x}_i^{n} + \gamma W_i^{\mathrm{inj}} u^{n+1} + \theta_0\right), \quad (4)$$
$$y_i^{n+1} = 0.995\, y_i^{n} + x_i^{n+1}, \quad (5)$$

where Eq. (4) governs the dynamical variable $x_i^{n+1}$ and is equivalent to Eq. (3) apart from the slow inhibitory term $-\alpha y_i^{n}$. Eq. (5) serves as a sliding-window integration, i.e. a high-pass filter, which applies a negative forcing to the dynamical variable of Eq. (4). In other words, at rest the system resides in its lower stable fixed point, labelled A, see Fig. 2(a). An external input then pushes the system across the unstable fixed point B, resulting in an excursion to the upper stable fixed point C. There, the slow forcing governed by the high-pass dynamics of Eq. (5) projects it back to the lower stable fixed point A. This enabled us to demonstrate experimentally an all-or-nothing spiking response to input stimuli (see Fig. 2(b)). Fig. 2(c) shows the average network response amplitude with increasing injection strength $\gamma$. Fig. 2(d) shows the spatio-temporal response of our network when two MNIST digits are given as input. The neurons respond differently depending on the average value of the 784 pixels of each digit.
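The fast/slow structure of the excitable-neuron model can be sketched for a single uncoupled neuron as follows. This is a minimal illustration: the parameters `alpha`, `beta`, `gamma`, and `theta` are assumed placeholder values rather than the calibrated experimental constants, and the network coupling is omitted, so the sketch reproduces the model structure rather than the exact experimental spike shape.

```python
import numpy as np

# Single-neuron sketch of the excitability model: a fast variable x with
# the cos^2 nonlinearity and a slow integrating variable y that feeds back
# negatively (the high-pass forcing). Parameter values are illustrative.
alpha, beta, gamma, theta = 0.5, 2.5, 1.0, 0.2

def simulate(stimulus, n_steps=200, t_stim=50):
    """Iterate the map; a brief input pulse of amplitude `stimulus`
    is applied at step `t_stim`, zero input otherwise."""
    x, y = 0.0, 0.0
    trace = np.empty(n_steps)
    for n in range(n_steps):
        u = stimulus if n == t_stim else 0.0
        x = -alpha * y + np.cos(beta * (x + gamma * u) + theta) ** 2  # fast variable
        y = 0.995 * y + x                                             # slow integration
        trace[n] = x
    return trace

trace = simulate(stimulus=0.8)
# Substituting the fast equation into the slow one gives
# y_{n+1} = (0.995 - alpha) * y_n + cos^2(...), a contraction for this alpha,
# so the trajectory stays bounded; whether a given stimulus elicits a full
# spike excursion depends on the parameter tuning.
```

The negative forcing of the slow variable is what resets the neuron after an excursion: once the fast variable has jumped to the upper branch, the accumulating `y` term pulls it back down toward the resting state.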

Conclusion
To conclude, our electro-optical system based on excitable neurons is the first proof-of-concept system that allows implementing learning in large-scale photonic SNNs. We are currently training our system on the MNIST dataset for handwritten digit recognition and exploring different learning concepts [4].

Fig. 2. (a) Non-linearity curve of the SLM. (b) Characteristic response of one of our 36,000 photonic neurons. (c) Average network response amplitude. (d) Spatio-temporal network response with two MNIST digits as input.