Prediction of Hippocampal Signals in Mice Using a Deep Learning Approach for Neurohybrid Technology Applications

Abstract: The increasing growth in knowledge about the functioning of the nervous system of mammals and humans, as well as the significant neuromorphic technology developments in recent decades, has led to the emergence of a large number of brain–computer interfaces and neuroprosthetics for regenerative medicine tasks. Neurotechnologies have traditionally been developed for therapeutic purposes to help or replace motor, sensory or cognitive abilities damaged by injury or disease. They also have significant potential for memory enhancement. However, there are still no fully developed neurotechnologies and neural interfaces capable of restoring or expanding cognitive functions, in particular memory, in mammals or humans. In this regard, the search for new technologies in the field of the restoration of cognitive functions is an urgent task of modern neurophysiology, neurotechnology and artificial intelligence. The hippocampus is an important brain structure connected to memory and information processing in the brain. The aim of this paper is to propose an approach based on deep neural networks for the prediction of hippocampal signals in the CA1 region based on received biological input in the CA3 region. We compare the results of prediction for two widely used deep architectures: reservoir computing (RC) and long short-term memory (LSTM) networks. The proposed study can be viewed as a first step in the complex task of the development of a neurohybrid chip, which allows one to restore memory functions in the damaged rodent hippocampus.


Introduction
The hippocampus stands as a cornerstone within the intricate architecture of mammalian brains, wielding profound influence over cognitive processes by encoding and safeguarding new information [1][2][3]. Its multifaceted roles extend beyond mere data storage, encompassing pivotal functions in memory consolidation, spatial orientation, and emotion formation [4]. Comprising distinct neural regions such as the Dentate gyrus (DG), CA3, CA1, and subiculum, the hippocampal formation orchestrates a symphony of interconnected pathways [5]. Primarily, these pathways are intricately woven with inputs from the Entorhinal Cortex (EC) [6], which projects via a trio of pathways: one trisynaptic pathway and two monosynaptic pathways [7,8].
The trisynaptic pathway consists of the perforant pathway from the second layer of the EC to the DG [9], the mossy fibers between the DG and CA3 [10], and the Schaffer collaterals between CA3 and CA1 [11]. The monosynaptic pathways are formed from the second layer of the EC to CA3 [12] and from the third layer of the EC to CA1 via perforant pathways [9]. CA3 has more recurrent connections than the other hippocampal regions [13], a feature that prompted researchers to attribute to it a crucial role in pattern completion and memory storage. The fifth layer of the EC receives afferent projections from CA1 directly and indirectly via the subiculum [14,15] and sends back projections to widespread cortical targets that provided the actual sensory inputs [16]. Finally, the DG can be viewed as the entry point for EC input via the perforant pathway. The output pathway in the hippocampus passes through the CA1 area and subiculum, returning to the EC.
Impairment of the functional state of the hippocampus (including disruption of the perforant pathway) in pathological human conditions leads to a memory deficit that disrupts normal functioning, or may be a sign of aging and brain deterioration in patients with Alzheimer's disease [17,18]. In addition, many other pathologies and injuries lead to damage in the hippocampus, e.g., short-term hypoxia [19], blunt head trauma and epileptiform activity [20]. Consequently, restoring cognitive function in afflicted individuals has emerged as a paramount global healthcare imperative [21][22][23]. Central to this endeavor is the reinstatement of sequential activation within hippocampal cell groups, particularly in the CA3 and CA1 layers, which are synaptically connected and interact with each other during memory encoding [24]. Thus, preserving cognitive function in the face of neurodegeneration hinges upon the meticulous orchestration of hippocampal circuitry.
In the dynamic landscape of neuroscience, a vibrant realm of modern neuroprostheses and cutting-edge neurotechnologies has emerged, offering tantalizing prospects for the restoration and augmentation of memory functions [25][26][27][28]. Within this realm lie a plethora of innovative systems designed to revive memory functions, notably the hippocampal neuroprosthesis. First of all, Berger, Hampson, Deadwyler and co-authors [28][29][30][31][32][33] developed a multi-channel MIMO (multi-input, multi-output) device, which transforms the recorded activity of hippocampal neurons into patterns of electrical stimulation to expand memory and cognitive functions in rodents and non-human primates in the CA3–CA1 circuit of the hippocampus in vitro and in vivo. They also tested this approach to improve memory in humans prior to hippocampal resection for epilepsy [34,35].
Meanwhile, Hogri et al. [36] have ventured into the realm of silicon, unveiling a remarkable very-large-scale integration (VLSI) chip engineered to emulate the fundamental principles of cerebellar synaptic plasticity. With seamless integration into cerebellar input and output nuclei, this chip orchestrates real-time interactions, faithfully replicating cerebellar-dependent learning in anesthetized rats. Adding to this tapestry of innovation, our own contributions have illuminated novel avenues in memory restoration. Through the conceptual fusion of rodent hippocampal activity and memristive devices, we have unveiled a symbiotic relationship between living hippocampal cells and neuron-like oscillatory systems [37][38][39][40][41]. This interplay promises not only to redefine our understanding of memory mechanisms but also to inspire transformative interventions in the realm of cognitive enhancement.
In recent years, the integration of artificial intelligence (AI) methodologies has emerged as a cornerstone in neuroscience research, presenting unprecedented opportunities for decoding intricate neural activity and facilitating the precise control of prosthetic devices. Notably, the application of AI technologies to the restoration of hippocampal signals has emerged as a focal point of investigation, drawing substantial attention from the scientific community [42,43]. Deep learning algorithms are often applied to various tasks of prediction of biological signals, e.g., sharp-wave ripples [44,45], as well as physiological signals, such as electromyogram, electrocardiogram, electroencephalogram, and electrooculogram (see [46] and references therein). Automatic feature extraction allows us to predict and simulate complex encoding processes in the brain [47].
Against this backdrop, the aim of this study is to propose and validate an innovative approach for the prediction of hippocampal signals, while ensuring ease of implementation on neural chips tailored for neuroprosthetic tasks. Central to our methodology is the utilization of deep neural networks, revered for their prowess in handling complex data and unraveling intricate patterns. Specifically, we focus on two prominent deep learning architectures renowned for their efficacy in time series prediction: the long short-term memory (LSTM) deep neural network and reservoir computing (RC). Through this methodology, we aim to construct a framework that not only achieves high accuracy in prediction but also demonstrates practical feasibility on neural hardware. To train our deep networks, we used signals recorded in the CA1 and CA3 regions of the rodent hippocampus, representative of the intricate neural dynamics inherent in memory processing. Through rigorous experimentation and analysis, we compare the predictive capabilities of the proposed models, utilizing a suite of evaluation metrics meticulously crafted to capture the nuanced properties of hippocampal biological signals.
The ultimate goal of this research is not only to make a significant contribution to improving our understanding of neural processes, but also to lay the foundation for revolutionary advances in the field of neuroprosthetics to improve the quality of life of people living with neurological disorders by developing innovative and effective solutions.
The structure of this paper unfolds as follows. In Section 2 we outline our approach and describe the associated pipeline. Within this section, we introduce an innovative method for predicting hippocampal signals based on received biological input using deep neural networks. Subsequently, in Section 3, we expound upon the dataset encompassing signals extracted from the CA1 and CA3 regions of the rodent hippocampus, elucidating the techniques employed for data preprocessing. Moving forward, Section 4 delves into the architecture of RC and LSTM networks utilized for signal prediction, alongside a comprehensive discussion of the chosen evaluation metrics aimed at gauging result quality. A comparative analysis of diverse deep neural network architectures is presented in Section 5. Finally, in Section 6, we discuss our findings, and we draw our conclusions in Section 7.

Description of the Implemented Approach
We propose an intelligent approach to restoring electrophysiological activity in mouse hippocampal slices using deep neural networks. The main idea of our approach can be described as follows. We stimulate the Dentate gyrus (DG) in healthy slices and obtain a subsequent "healthy" response in the form of field excitatory postsynaptic potentials (fEPSPs), first in the CA3 region and then in the CA1 region. If the conduction of neural signals along the perforant pathway is disrupted, e.g., due to mechanical trauma in the CA3 region, no signal is obtained from the CA3 region and only a subtle "unhealthy" signal is received from the CA1 region in response to DG stimulation according to the chosen protocol. In order to restore healthy activity in the CA1 region, we use the CA3 fEPSP as an input to our neural network to predict the CA1 fEPSP. This approach is schematically illustrated in Figure 1. Our method allows the predicted signal to be used to simulate healthy activity in damaged brain regions in real-time biological experiments [40].
The pipeline of our study is presented in Figure 2. First, we carried out a series of biological experiments to record fEPSPs in hippocampal slices in two areas (CA1 and CA3) under normal conditions during electrical stimulation of the DG. As a result, we obtained a set of paired fEPSPs at different stimulus amplitudes. The detailed fEPSP registration protocol is described in Section 3. Subsequent procedures with the resulting set of signals involve standard preprocessing for biological signals, which consists of the following sequential steps: artifact removal (around 20 points in the vicinity of the stimulus are replaced by previously encountered points taken with a shift of 150 points), 1000 Hz low-pass filtering, standardization, 11-point smoothing (linear approximation), application of a Gaussian filter with a kernel standard deviation of 35 along all axes, and averaging over all signals recorded for each stimulus amplitude. The sampling rate was 20 kHz. The resulting data, containing pairs of averaged CA3–CA1 signals, were divided into training and testing samples for further training and prediction of the output signal in the hippocampus using two deep learning architectures. A detailed description of the data collection protocol, as well as the data preprocessing routine, can be found in Section 3. A detailed description of the applied methodology, including the chosen deep learning architectures, evaluation metrics and peculiarities of the training process, is presented in Section 4. The novelty of the proposed approach is as follows. Unlike previous studies by other research groups, where multi-channel signals were used, in our work we used signals from a single electrode. In addition, to the best of our knowledge, this is the first attempt to use different deep learning architectures instead of oscillatory systems and electronic circuits.

Ethics Statement
In this study, the protocols of all experiments were reviewed and approved by the Bioethics Committee of the Institute of Biology and Biomedicine of the National Research Lobachevsky State University of Nizhny Novgorod.

Data Collection
We collected a dataset of fEPSPs recorded from hippocampal slices of C57BL/6 mice aged 2–3 months.
The fEPSPs were recorded using glass microelectrodes pulled on a Sutter Instruments Model P-97 puller from Harvard Apparatus (Holliston, MA, USA) capillaries (30-0057 GC150F-10, 1.5 mm OD × 0.86 mm ID × 100 mm length). Stimulation and registration electrodes were placed in the DG region and the CA3 and CA1 regions, respectively. The recording electrodes were installed in the holder of a HEKA EPC 10 USB PROBE 1 preamplifier head connected to a HEKA EPC 10 USB biosignal amplifier. A stainless-steel bipolar electrode was used for electrical stimulation of the DG area of the hippocampus. The electrodes were filled with the ACSF solution. The pipette tip resistance was 4–8 MΩ. We applied a 50 µs stimulation signal whose amplitude was increased from 100 to 1000 µA in 100 µA steps.
A stimulating electrode placed in the DG area delivered square current pulses to activate the cells in the DG area of the hippocampus. Recording electrodes were installed in the pyramidal neuron dendrite layers of the CA3 and CA1 hippocampal areas. This made it possible to record the activation of the perforant pathway in the hippocampus in response to electrical stimulation. Original traces of the fEPSPs recorded in the dendrites of the CA3 and CA1 hippocampal areas are shown below. A similar protocol is presented in our previous works [37,39,40,48,49]. Data visualization and recording were performed using Patchmaster software.
When collecting biological data from real experimental animals, the individual differences of each animal should be taken into account. However, averaging data from at least three experimental animals, in particular from their brain slices (with n of at least 5), is adequate for further analysis, including the application of machine learning. Furthermore, when collecting biological data, the minimum number of animals should be used, following bioethical considerations. In our study, we used a dataset of 76 traces from nine hippocampal slices obtained from five experimental animals.

Data Preprocessing
At the first step of data analysis, all obtained fEPSP signals were preprocessed to remove artifacts [50]. Since the stimulation signal in all experiments had the same duration and was applied at the same moment, we removed the related artifacts from our data by excluding the 20 time points corresponding to the stimulus.
Then, noise was filtered with a Gaussian filter with σ = 35 and radius r = 4σ. After that, we normalized the data by converting them into a dataset with zero mean and unit variance. The data were then separated into inputs and outputs for the models.
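As an illustration, the artifact-removal, smoothing and standardization steps can be sketched as follows (function and parameter names are ours, and the artifact step is simplified to excision; the original pipeline's replacement-by-shifted-samples may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def preprocess_trace(trace, stim_start, n_artifact=20, sigma=35.0):
    """Illustrative preprocessing for one fEPSP trace.

    Steps follow the text: excise the stimulus artifact samples, smooth
    with a Gaussian filter (sigma = 35, truncated at radius 4*sigma),
    then standardize to zero mean and unit variance.
    """
    x = np.asarray(trace, dtype=float).copy()
    # Remove the 20 samples around the stimulus artifact.
    x = np.delete(x, np.arange(stim_start, stim_start + n_artifact))
    # Gaussian smoothing; truncate=4.0 gives a kernel radius of 4*sigma.
    x = gaussian_filter1d(x, sigma=sigma, truncate=4.0)
    # Standardize to zero mean and unit variance.
    return (x - x.mean()) / x.std()
```

This mirrors the order of operations described above; in practice the low-pass filtering and 11-point smoothing steps would be applied in the same chain.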
In order to obtain signals for training, we divided all signals into six groups with different values of the stimulus amplitude.All signals inside each group were then averaged, except one used as a test signal.This allowed us to exclude the signal dependence on the experimental conditions, such as the inclination and exact location of the electrode insertion, the exact insertion depth, etc.
The data used in the model as input and output signals consisted of 1500 time samples recorded, respectively, from the CA3 and CA1 regions.The optimal number of time samples in the input data was determined experimentally during the optimization process.

Methodology
The deep learning architectures were developed for fEPSP signal prediction in the CA1 region of the hippocampus in laboratory mice using the CA3 signal as an input. The choice of the optimal deep learning architecture for our task is determined by the following factors. Since the datasets are relatively small, we needed deep learning architectures that do not contain too many parameters (trainable weights) in order to avoid training problems. From this point of view, LSTM and RC suit better than, for instance, deep convolutional networks and transformers: preliminary studies showed that the latter architectures are prone to underfitting in this task. Therefore, LSTM and RC were selected for further studies.
In this section, we will first describe the architecture of the proposed LSTM (Section 4.1) and RC (Section 4.2) deep neural networks. Then, in Section 4.3, we will present the evaluation metrics used to qualify the prediction. Finally, in Section 4.4, we will discuss some features of the training process.

Long Short-Term Memory Network
The LSTM network, a type of recurrent network, can be viewed as the first-choice deep learning architecture for time series prediction due to its ability to learn long-term dependencies [51]. The basic unit in LSTM networks is called an LSTM cell, which remembers values over arbitrary time intervals, while three gates regulate the information flow into and out of the cell [52]. These are the input gate, the output gate, and the forget gate. Each of these gates applies a sigmoid activation function (s) to its input. Here and later we use the following notation: $W_j$ is a weight matrix, $b_j$ is a bias vector ($j = f, i, c, o$) and $x_t$ is a vector of input values (the subscript $t$ indexes the time step).
Forget gates decide what information to discard from the previous cell state by assigning it, relative to the current input, a value between 0 and 1. This gate is defined as

$$f_t = s\left(W_f\,[h_{t-1}, x_t] + b_f\right), \qquad f_t \in (0,1)^h, \tag{1}$$

where $h$ is the number of the LSTM cell's units. Input gates decide which pieces of new information to store in the current state, using the same mechanism as forget gates. The functionality of the input gate can be described in two parts: $i_t$ and $\tilde{c}_t$. Here, $i_t$ is the information we want to remember, which we compute using the sigmoid function (s) as follows:

$$i_t = s\left(W_i\,[h_{t-1}, x_t] + b_i\right). \tag{2}$$

To create a vector of new candidate values $\tilde{c}_t$, we use the hyperbolic tangent function (σ), i.e.,

$$\tilde{c}_t = \sigma\left(W_c\,[h_{t-1}, x_t] + b_c\right). \tag{3}$$

Then, the old cell state $c_{t-1}$ is replaced by a new cell state, removing the information selected by the forget gate in Equation (1):

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \tag{4}$$

where $c_t$ is the updated cell state and the operator $\odot$ denotes the Hadamard product (element-wise product). Finally, output gates control which pieces of information in the current state to output by assigning a value from 0 to 1 to the information, considering the previous and current states. The output is passed through a sigmoid layer and then a hyperbolic tangent layer:

$$o_t = s\left(W_o\,[h_{t-1}, x_t] + b_o\right), \tag{5}$$

$$h_t = o_t \odot \sigma(c_t), \tag{6}$$

where $o_t \in (0,1)^h$ is the output gate's activation vector, $h_t \in (-1,1)^h$ is the hidden state vector, also known as the output vector of the LSTM unit, and the superscript $h$ refers to the number of hidden units. Note that the sigmoid activation function (s) maps values into the interval (0, 1), while the hyperbolic tangent activation function (σ) maps values into the interval (−1, 1).
After the above steps, the cell state is updated. Lastly, the output of the current state is computed from the values of the updated cell state together with the values from the sigmoid layer, which determine the components of the cell state that are included in the output.
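The gate updates described above can be checked with a minimal single-step NumPy sketch (the stacked weight layout and all names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step, a minimal sketch of the gate updates above.

    W maps the concatenated [h_prev, x_t] to the four stacked gate
    pre-activations (forget, input, candidate, output); this layout is
    illustrative, not the paper's implementation.
    """
    hx = np.concatenate([h_prev, x_t])
    z = W @ hx + b                      # shape (4*h,)
    h = h_prev.size
    f = sigmoid(z[0*h:1*h])             # forget gate
    i = sigmoid(z[1*h:2*h])             # input gate
    c_tilde = np.tanh(z[2*h:3*h])       # candidate cell state
    o = sigmoid(z[3*h:4*h])             # output gate
    c_t = f * c_prev + i * c_tilde      # Hadamard products, new cell state
    h_t = o * np.tanh(c_t)              # new hidden state / output
    return h_t, c_t
```

Since the output gate and the tanh both map into bounded intervals, every component of the returned hidden state lies strictly inside (−1, 1).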
The architecture of the LSTM network used in our study is shown in Figure 3. We used a bidirectional version of this recurrent network: it makes a forward pass through time and a backward one, with separate hidden states and learnable matrices. The model includes a linear input layer of size 1 × 200 with the ReLU activation function. There are also two LSTM cells, each containing 200 neurons; their output, after a dropout layer that randomly zeroes elements with probability 0.2 during training, goes to a linear layer with a weight matrix of size 400 × 1. This final output yields the predicted value. We used 1500 steps as the length of the input window (the length after which the hidden state is reset to zero). Due to their complicated structure, state-of-the-art LSTM networks include a huge number of model parameters, typically exceeding the usual on-chip memory capacity. Therefore, network inference and training require parameters to be passed to the processing unit from a separate chip, and inter-chip data communication severely limits the performance of LSTM-based recurrent networks on conventional hardware.
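A PyTorch sketch of this architecture might look as follows. The class and parameter names are ours, and we interpret the two 200-neuron LSTM cells as a two-layer bidirectional `nn.LSTM`; that interpretation is an assumption, not the paper's code:

```python
import torch
import torch.nn as nn

class CA1Predictor(nn.Module):
    """Illustrative sketch: a 1->200 linear input layer with ReLU,
    a two-layer bidirectional LSTM with 200 hidden units and dropout 0.2,
    and a 400->1 linear readout (400 = forward + backward states)."""

    def __init__(self, hidden=200):
        super().__init__()
        self.inp = nn.Linear(1, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, dropout=0.2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):           # x: (batch, window, 1) CA3 samples
        z = torch.relu(self.inp(x))
        z, _ = self.lstm(z)         # (batch, window, 400)
        return self.out(z)          # (batch, window, 1) predicted CA1
```

In the setting described above, the input window would be 1500 samples of the CA3 fEPSP and the output the matching 1500 predicted CA1 samples.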

Reservoir Computing
Reservoir computing (RC) is a computational framework that is also based on recurrent neural networks [53][54][55]. A typical RC network includes reservoir and readout layers, as shown in Figure 4. Here, the reservoir network consists of randomly connected neurons, which can generate high-dimensional transient responses to a certain input (the reservoir's states). These high-dimensional reservoir states are then processed with a simple readout layer, generating the output. During RC training, the weight connections between neurons in the reservoir remain untrained, and only the readout layer is trained (e.g., using linear regression) [56,57]. Compared to other DNNs, RC exhibits the same or even better performance and requires substantially smaller datasets and training time [58,59]. In our study we employ the RC network as follows. The reservoir layer is a set of leaky integrate-and-fire (LIF) neurons with random connections between them [57]. The reservoir involves three matrices: the input matrix $W_{in}$ (by which the input vector is multiplied), the recurrent matrix $W$ (the connections between the neurons themselves) and the feedback matrix $W_{fb}$. All values in the $W_{in}$ and $W_{fb}$ matrices are initialized randomly according to a Bernoulli distribution; the values of the connections in matrix $W$ follow a normal distribution. We used the following network parameters: the size of matrix $W$ was 300 × 300, the leakage rate was lr = 0.6, the spectral radius of matrix $W$ was sr = 0.1 and the connectivity of $W$ was 0.9. As the neuron activation function we chose the hyperbolic tangent. The output of the reservoir layer is passed to the readout layer, which in our architecture is a ridge regression layer. The regularization parameter was set to λ = 10⁻⁷.
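A minimal NumPy sketch of such a reservoir is given below: leaky-integrator neurons with tanh activation, a sparse random recurrent matrix rescaled to a target spectral radius, and a ridge-regression readout. The initialization details and function names are ours, and the feedback matrix is omitted for simplicity:

```python
import numpy as np

def make_reservoir(n=300, sr=0.1, connectivity=0.9, seed=0):
    """Random recurrent matrix W, rescaled so its spectral radius equals sr."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n)) * (rng.random((n, n)) < connectivity)
    W *= sr / max(abs(np.linalg.eigvals(W)))
    return W

def run_reservoir(u, W, W_in, lr=0.6):
    """Collect leaky-integrator states: x <- (1-lr)*x + lr*tanh(W x + W_in u)."""
    x = np.zeros(W.shape[0])
    states = np.empty((len(u), W.shape[0]))
    for t, u_t in enumerate(u):
        x = (1 - lr) * x + lr * np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

def ridge_readout(states, y, lam=1e-7):
    """Train the readout only: solve (S^T S + lam I) w = S^T y."""
    A = states.T @ states + lam * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ y)
```

Note that only `ridge_readout` involves training; the reservoir matrices stay fixed after initialization, which is the defining property of the RC framework.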
Compared to other deep learning frameworks, RC has several advantages: a fast training process, simplicity of implementation, reduced computational cost, and no vanishing or exploding gradient problems [60]. Moreover, RC networks do have memory and, for this reason, are capable of tackling temporal and sequential problems. These advantages have attracted increasing interest in RC research and its many applications since its conception. Another valuable feature of RC is its relative biological plausibility, which implies potential for scalability through physical implementations [61].
Additionally, the same reservoir can be used to learn several different mappings from the same input signal.In our case, a single reservoir can be used, e.g., both for prediction of activity in the CA1 region based on fEPSP signals from the CA3 region and simultaneously for classification of the fEPSP signals into two classes: signals from healthy hippocampus and signals from damaged hippocampus.To carry this out, one simply trains one readout function for the prediction task and another for the classification.In the inference mode, the reservoir is driven by an fEPSP signal, and both readout functions can be used simultaneously to obtain two different output signals from the same reservoir [59].
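This multi-readout idea can be illustrated with synthetic reservoir states: the state matrix is shared, and each task only adds its own ridge-regression weights (all names and data below are illustrative, not experimental):

```python
import numpy as np

# One reservoir can feed several independently trained readouts: the state
# matrix is shared, only the readout weights differ. Synthetic states S
# stand in for reservoir activity here.
rng = np.random.default_rng(1)
S = rng.normal(size=(400, 50))              # states: 400 steps, 50 neurons
S1 = np.hstack([S, np.ones((400, 1))])      # append a bias column
y_reg = S @ rng.normal(size=50)             # regression target (synthetic)
y_cls = (y_reg > 0).astype(float)           # binary label (synthetic)

lam = 1e-7
A = S1.T @ S1 + lam * np.eye(51)
w_reg = np.linalg.solve(A, S1.T @ y_reg)    # readout 1: signal prediction
w_cls = np.linalg.solve(A, S1.T @ y_cls)    # readout 2: classification
pred_reg = S1 @ w_reg
pred_cls = (S1 @ w_cls > 0.5).astype(float)
```

In inference mode, both weight vectors are applied to the same state stream, so a single reservoir simultaneously yields the predicted CA1 signal and the healthy/damaged class label.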

Evaluation Metrics
To assess the quality of the predicted signal, we used two evaluation metrics. The first one is the Mean Absolute Percentage Error (MAPE), which is widely used for time series forecasting [62]:

$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|,$$

where $y_i$ is the true value, $\hat{y}_i$ is the predicted value and $n$ is the number of samples.
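The MAPE metric is a one-liner in NumPy (a direct translation of the formula; note it is undefined for zero-valued true samples, which is not handled here):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```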
The second one is a custom metric based on several fEPSP parameters that describe the functional properties of the neuron groups that directly generate the signal. We include the following signal parameters: rise time, decrease time, response time halfwidth, amplitude and slope, as shown in Figure 5. Namely, we define the evaluation metric as the weighted combination of the squared deviations of slope, response halfwidth, rise time, amplitude and decay time, with weights 0.05, 0.4, 0.3, 0.05 and 0.2, respectively. The value of each component of the metric was normalized to the range from 0 to 1. The choice of the evaluation metric was motivated by the biological background of the problem.
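A sketch of such a weighted metric is shown below. The weight-to-parameter assignment and the normalization scheme are our assumptions, since the text does not spell them out, and all names are illustrative:

```python
import numpy as np

# Weights follow the text; the mapping of the fifth weight to amplitude and
# the normalization by per-parameter scales are assumptions of this sketch.
WEIGHTS = {"slope": 0.05, "halfwidth": 0.4, "rise_time": 0.3,
           "amplitude": 0.05, "decay_time": 0.2}

def custom_metric(true_params, pred_params, scales):
    """Weighted sum of squared, normalized deviations of fEPSP parameters.

    true_params/pred_params: dicts with the five keys above;
    scales: per-parameter constants mapping each deviation into [0, 1].
    """
    return sum(w * ((true_params[k] - pred_params[k]) / scales[k]) ** 2
               for k, w in WEIGHTS.items())
```

A perfect prediction yields a metric of 0; a deviation in a heavily weighted parameter such as halfwidth dominates the score.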

Training Process
To train the neural network, the averaged fEPSPs recorded in the CA3 hippocampal area in response to different electrical stimulus amplitudes (from 100 to 1000 µA) were presented as input signals. The predicted fEPSP for the CA1 hippocampal area was obtained as the output signal for further use in a real-time biological experiment.
During training, we optimize the Mean Squared Error (MSE) loss function

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2,$$

where $n$ is the number of training samples, $y_i$ is the true value and $\hat{y}_i$ is the predicted value.
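For reference, the MSE loss is a direct average of squared residuals:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error over n samples."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)
```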
The model was trained on ten preprocessed records. We used a leave-one-out cross-validation approach: in each iteration, nine records were used for the training set and one for the test set.
To optimize the learning rate, we reduced it whenever the loss stopped improving for 10 epochs. The total number of epochs was set to 50. Hyperparameters were tuned with a grid search procedure. For LSTM, we considered the following hyperparameters: the number of neurons in the hidden state, the number of layers, the dropout probability and the length of the input window. For RC, we considered the number of neurons in the reservoir, the matrix connectivity, the leakage and the spectral radius. Such hyperparameter optimization is a widespread practice employed in numerous studies [63].
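The grid search over the RC hyperparameters named above can be sketched as follows; the candidate value grids and the `train_and_score` scoring callback are hypothetical placeholders, not the values used in the study:

```python
from itertools import product

# Illustrative candidate grids for the RC hyperparameters named in the text.
grid = {
    "n_neurons":       [100, 200, 300],
    "connectivity":    [0.5, 0.7, 0.9],
    "leakage":         [0.2, 0.4, 0.6],
    "spectral_radius": [0.05, 0.1, 0.5],
}

def grid_search(train_and_score, grid):
    """Evaluate every hyperparameter combination; return the best one.

    train_and_score(**params) trains a model and returns a score to
    minimize (e.g., cross-validated MSE).
    """
    best, best_score = None, float("inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = train_and_score(**params)
        if score < best_score:
            best, best_score = params, score
    return best, best_score
```

For the LSTM, the same routine applies with a grid over hidden size, number of layers, dropout probability and input-window length.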

Experiments and Comparisons of Deep Learning Architectures
In this section, we present experimental results and compare the two deep learning architectures introduced above: the first architecture is based on the long short-term memory (LSTM) network, while the second utilizes reservoir computing (RC). We analyze the performance of each architecture in terms of accuracy, training time, and computational complexity.

Exploratory Data Analysis
The signal we utilize contains data from a single electrode, as opposed to previous studies that employed MIMO techniques [27]. To extract the maximum information from this signal, we conduct exploratory data analysis to study and summarize the main characteristics of the obtained fEPSP signal datasets, as shown in Figure 6. As observed, most features exhibit a smooth dependence of the mean value on the stimulus amplitude. However, outliers are present in all cases, indicated by the blue dots in the figure. Note that, for some parameters, such as the decrease time (Figure 6b) and slope (Figure 6e), the variance is quite limited, except in the case of 100 µA amplitude stimuli. However, for other parameters, like response halfwidth, the variance is generally sufficient in almost all cases (Figure 6c).
The variance for rise time (Figure 6a) and amplitude (Figure 6d) strongly depends on the amplitude of the stimulus. We can distinguish groups of signals with relatively small dispersion in these parameters and others with a much stronger dispersion. Additionally, it is noteworthy that the CA3 and CA1 signals can also be statistically distinguished.
Using principal component analysis (PCA), we identified the most significant features of the signals.Based on the results of this analysis, weight coefficients were selected in a custom metric (see Section 4.3), which was subsequently used to assess the quality of the predicted signal.
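A minimal PCA via SVD, of the kind used here to rank signal parameters by their loadings on the leading components, can be sketched as follows (the function name and return format are ours):

```python
import numpy as np

def pca_loadings(X, n_components=2):
    """PCA via SVD of the centered feature matrix (a minimal sketch).

    Returns (components, explained_variance_ratio). Rows of `components`
    are the principal directions; the magnitude of each entry indicates
    how strongly the corresponding signal parameter loads on that
    component, which can guide the choice of metric weights.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (len(X) - 1)
    return Vt[:n_components], var[:n_components] / var.sum()
```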
Analyzing the statistical properties of these parameters leads us to consider the possible existence of clusters in the data.Identifying such clusters could help us understand qualitative changes in the slice's response to the stimulus as the stimulation amplitude changes, which could have implications for deep learning algorithms.
To test this hypothesis, we employ the hierarchical density-based spatial clustering of applications with noise (HDBSCAN) algorithm, which allows us to identify clusters without prior knowledge of their quantity.
As shown, the method identified three clusters (classes) in the data. The first class comprises the noisiest signals. In the second class, the response begins with a short-term, small-amplitude increase in the fEPSP level, while the third class exhibits the most characteristic response to a rectangular pulse. Examples of all signal types are depicted in Figure 7. Considering the choice of evaluation metrics and the design of the learning process, we anticipate that our deep neural networks will provide less accurate predictions for signals belonging to Classes 1 and 2. Lower prediction quality for highly noisy signals (Class 1) is expected. The difficulty in predicting signals of a specific shape belonging to Class 2 arises from their unique response form and their relatively infrequent occurrence. Improving the accuracy of prediction for such signals requires further investigation.

Choosing Optimal Deep Learning Architecture for fEPSP Signal Prediction
In order to select the optimal architecture for fEPSP signal prediction in the CA1 region based on the CA3 signal, we compared two deep learning architectures: LSTM and RC. Figure 8 shows an example of true and predicted CA1 signals for LSTM (Figure 8a) and the reservoir (Figure 8b) for a stimulus amplitude of 400 µA. All true and predicted signals for all stimulus amplitudes are presented in Appendix A. Comparison of the values of the evaluation metrics for the predicted signals highlights the ability of the two networks to capture and mimic certain features and characteristics of the signals. As one can see from Figure 9, in general, both deep learning architectures allow quite accurate predictions to be obtained. According to the MAPE metric, both networks showed approximately the same result (Figure 9a), except for the case of the 1000 µA stimulus. Nevertheless, our custom weighted metric shows that in this case LSTM is able to predict valuable features of the signal quite exactly, while RC shows worse but still acceptable results. The custom weighted metric can also distinguish the amplitudes for which each network shows better accuracy (Figure 9b). Let us analyze how the RC and LSTM deep neural networks predict each of the specific signal features that our custom metric takes into account. To carry this out, we plot the true and predicted values of rise time, decrease time, halfwidth, amplitude and slope of the CA1 fEPSP signals; see Figure 10. As one can see from Figure 10a,b, both deep learning architectures predict the rise time and decrease time of the CA1 fEPSP quite accurately; moreover, RC shows better accuracy than LSTM. Nevertheless, LSTM copes better with the prediction of the halfwidth of the CA1 fEPSP signals in most cases, while RC fails to predict this parameter for high stimulus amplitudes (800–1000 µA); see Figure 10c. Figure 10d shows the results for the true and predicted amplitude of the CA1 fEPSP signals. As one can see, for lower stimulus amplitudes (100–300 µA), both deep networks show good accuracy, while for higher stimulus amplitudes the prediction error grows and LSTM shows slightly better accuracy than RC. As for the slope parameter, both networks show approximately the same result; see Figure 10e.
The predicted-signal quality metrics show how accurately the biological signal was predicted by the neural network. This makes it possible to use the predicted signal to restore activity in the damaged area of the mouse hippocampal slice in real time: the predicted signal will be sent to the stimulating electrode in real time when the fEPSP is recorded after the lesion in the CA3 area. Maximizing the quality of the predicted signal will allow us to most effectively "add" activity to the hippocampal slice to produce an output signal in the CA1 area similar to that recorded in the "healthy" hippocampus.
To summarize, RC predicts CA1 fEPSP signals more accurately than the LSTM on average when different signal features are taken into account separately. RC also has several other valuable advantages. First, it allows one to study the inner structure of the reservoir, which, for optimal reservoir parameters (namely, the number of neurons in the reservoir and the probability of couplings between them), should statistically reflect the features of neurotransmission in the slice. In fact, the reservoir can be viewed as a dynamical system consisting of identical LIF neurons whose couplings are specified according to a certain statistical distribution; thus, the reservoir simulates neuronal activity in the slice at a mesoscopic level. Second, the simple RC architecture allows implementation in a microchip in which each element has more than two states. Since the reservoir itself is untrained and only the last (output) layer is trained, this architecture is easy to implement in the form of spiking neural networks. These properties make RC the optimal deep learning architecture for the prediction of hippocampal signals in application to neurohybrid implantable chips.
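The division of labor described above, a fixed random reservoir plus a trained linear readout, can be sketched with a leaky-unit echo state network. This is a simplified, rate-based stand-in for the LIF reservoir, not the paper's implementation; all parameter values (number of neurons, coupling probability, leak rate, spectral radius) and the toy one-step-ahead task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reservoir parameters (assumptions, not the paper's values)
N = 200        # number of reservoir neurons
p = 0.1        # coupling probability between neurons
leak = 0.3     # leak rate of each unit
rho = 0.9      # spectral radius, kept below 1 for the echo-state property

# Fixed (untrained) sparse random coupling matrix and input weights
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < p)
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

def run_reservoir(u):
    """Drive the fixed reservoir with input series u and collect its states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for i, ut in enumerate(u):
        x = (1.0 - leak) * x + leak * np.tanh(W @ x + W_in * ut)
        states[i] = x
    return states

# Only the linear readout is trained, here by ridge regression on a toy
# one-step-ahead prediction task (a stand-in for the CA3 -> CA1 mapping)
u = np.sin(np.linspace(0.0, 8.0 * np.pi, 400))
y = np.roll(u, -1)
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
y_hat = X @ W_out
```

Because only `W_out` is learned, training reduces to a single linear solve, which is what makes the architecture attractive for on-chip implementation.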

Discussion
In recent years, deep learning has become the most popular computational approach in the field of machine learning, achieving exceptional performance on a variety of complex cognitive tasks that matches or even exceeds human capabilities [64][65][66]. Artificial neural networks can have billions of simulated neurons connected in various topologies to perform complex tasks, such as object recognition and learning, beyond human capabilities [67][68][69]. Deep neural networks have profoundly revolutionized the field of information technology, leading to the development of more optimized and autonomous artificial intelligence systems [70][71][72]. The main benefits of deep learning are the ability to learn from huge amounts of data and to predict complex dynamics without any model of the underlying system when only values of the observable variable are available [73][74][75].
RC, which was chosen as the optimal deep learning architecture for this task, is a powerful method that can successfully predict sufficiently irregular and chaotic time series [76]. While not as flexible as other methods based on recurrent neural networks, RC has a number of properties that make it a good method of choice for these kinds of tasks [77]. The ability to set the macro-scale properties of the network, coupled with quick training through linear regression, allows an RC to be trained and deployed quickly and easily. Additionally, RC has been shown not only to give good predictions but also to react like a physical model to perturbations in the system state [78,79]. RCs can also be scaled up to large systems and accelerated on dedicated hardware for vast speedups [80], making them an exciting candidate for the simulation of high-dimensional biological signals. It therefore comes as no surprise that RC architectures are widely used in the analysis and prediction of EEG and other types of neural activity, in studies ranging from epileptic seizure prediction [81] and sleep pattern analysis [82] to brain-computer interfaces [83]. The ease of implementation on different types of hardware [84][85][86][87][88] and the simple learning process also make RC a suitable architecture for neural chips. Another advantage of RC is that it can be combined with other input and output layers in forthcoming in vivo experiments.
Neuromorphic devices and platforms that combine artificial intelligence with the operating principles of brain cells are currently being actively developed [89][90][91]. The most impressive work is aimed at developing neuromorphic systems, including miniature ones, that can be used for neuroprosthetics and neurorehabilitation [92][93][94]; the development of neurochips for restorative medicine by Elon Musk's team is also worth noting [95][96][97]. However, the broad extension of such studies and a thorough examination of each of their stages are essential for the subsequent application of these results in clinical medicine.
There is a huge variety of neurohybrid chips, including those indicated in the Introduction of our paper [10][11][12][13][14]17]. In addition, neuromorphic chips can implement the architecture of artificial neural networks [98], as well as hybrid architectures that combine artificial neural networks with spiking neural networks. For instance, the Tianjic chip contains a neuromorphic processor and a computing module whose architecture supports both classical artificial neural networks and spiking ones, the latter being closer in operating principle to biologically relevant neural networks. The chip contains more than 150 cores, each consisting of artificial analogues of an axon, a synapse, a dendrite and a perikaryon, which allows real neuronal activity to be simulated. The cores can switch between the two operating modes and can convert signals from a classical neural network into binary nerve impulses for a spiking neural network and vice versa [99]. Thus, the development of neurohybrid chips and neuromorphic devices is an extremely urgent task.
Implemented on a neurohybrid system, the described deep architectures can provide the restoration of memory functions in the damaged rodent hippocampus. Note that various promising architectures, such as memristor-based chips, can be used as a hardware platform for this task [41].
Compared to previous studies [27], we use a single-channel signal from hippocampal slices, which contains less information than a multi-channel signal but is much easier to obtain in a biological experiment and requires less sophisticated (and much cheaper) electrodes. To extract as much information as possible from this simple signal, we employ deep learning architectures (RC and LSTM) instead of oscillatory systems and electronic circuits. The idea is that the deep learning algorithm automatically extracts the characteristic description and features of the signals and, on that basis, builds an intrinsic representation within the neural network that corresponds to the transformation of the signal as it passes along the trisynaptic pathway.
Although the proposed investigation is limited to rodent studies, it is highly likely that our approach will be applicable to other types of acquired brain injury.Such studies are extremely important for the treatment of neurodegenerative diseases associated with memory impairment and have high prospects for future use in practical medicine.

Conclusions
In the presented study, we have proposed and tested a deep-learning-based approach for restoring activity in the damaged mouse hippocampus in vitro, in hippocampal slices. To this end, we have compared two deep neural architectures in the task of predicting fEPSP signals in the CA1 region of the hippocampus using fEPSP signals in the CA3 region as an input.
To assess the performance of the proposed deep learning models, we have used two evaluation metrics: MAPE and a custom composite metric, which allowed us to take into account valuable features of the signals. The training accuracy has shown how well the deep learning model fitted the features of the response, such as the slope, the halfwidth of the response, the rise time and the decay time, for a given stimulus amplitude. In addition, the prediction accuracy has demonstrated how well the model predicted the synaptic parameters for data excluded from the training set. We have compared the prediction results for averaged and non-averaged signals. While the averaged signals allowed us to deduce the universal form of the response for each stimulus, using the non-averaged signals implied that a similar procedure could be performed by our deep neural model itself.
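The response features listed above can be read directly off a single trace. The following is a hypothetical sketch of such an extraction for a negative-going fEPSP-like waveform; the 20-80% rise/decay thresholds, the 50% halfwidth criterion, the sign convention and the double-exponential toy trace are all assumptions, not the paper's exact definitions.

```python
import numpy as np

def fepsp_features(t, v):
    """Extract amplitude, 20-80% rise/decay times, halfwidth and slope
    from a single negative-going fEPSP-like trace (sign convention assumed)."""
    t, v = np.asarray(t, float), np.asarray(v, float)
    depth = -v                              # positive deflection magnitude
    peak_i = int(np.argmax(depth))          # fEPSP peak = largest deflection
    amp = depth[peak_i]
    lo, hi = 0.2 * amp, 0.8 * amp
    # rise time: 20% -> 80% of peak on the leading edge
    t20 = t[np.argmax(depth[:peak_i + 1] >= lo)]
    t80 = t[np.argmax(depth[:peak_i + 1] >= hi)]
    rise = t80 - t20
    # decay time: 80% -> 20% of peak on the trailing edge
    after = depth[peak_i:]
    d80 = t[peak_i + np.argmax(after <= hi)]
    d20 = t[peak_i + np.argmax(after <= lo)]
    decay = d20 - d80
    # halfwidth: total duration spent above 50% of peak
    above = np.where(depth >= 0.5 * amp)[0]
    halfwidth = t[above[-1]] - t[above[0]]
    slope = (hi - lo) / rise                # mean slope over the 20-80% rise
    return dict(amplitude=amp, rise_time=rise, decay_time=decay,
                halfwidth=halfwidth, slope=slope)

# Toy double-exponential waveform standing in for a recorded fEPSP trace
t = np.linspace(0.0, 50.0, 1000)
v = -(np.exp(-t / 10.0) - np.exp(-t / 2.0))
feats = fepsp_features(t, v)
```

Comparing such feature dictionaries for true and predicted traces is what allows a composite metric to weight physiologically meaningful errors more heavily than raw pointwise deviations.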
Thus, using these metrics, we have compared the performance of the two deep architectures, LSTM and RC, on our data. Our numerical experiments have shown that RC demonstrates better accuracy both on average and in predicting several important signal characteristics. These results allow us to conclude that RC is the optimal deep architecture for the proposed approach and is very promising for further implementation on a neural chip.

Figure 1.
Figure 1. (a) Scheme of the experiment on recording field excitatory postsynaptic potentials (fEPSPs) in mouse hippocampal slices and the scheme for training the neural networks (LSTM or reservoir). The left panel shows a mouse hippocampal slice with the protocol for installing the recording and stimulating electrodes for electrical stimulation and subsequent activation of synaptic transmission in the perforant (trisynaptic) pathway of the hippocampus. A stimulating electrode was placed in the DG area and delivered square electrical current pulses to activate the cells of the DG area of the hippocampus. Recording electrodes were installed in the pyramidal neuron dendrites of the CA3 and CA1 areas of the hippocampus. This made it possible to record the activation of the perforant pathway in the hippocampus in response to electrical stimulation. Shown below are the original traces of the fEPSP recorded in the dendrites of the CA3 and CA1 hippocampal areas. These signals were fed to the input of the LSTM or reservoir for training. The right panel shows the architectures of the neural networks used. After training these networks on fEPSP signals recorded from the CA3 region of hippocampal slices (input signals), predicted signals for the CA1 region (output signals) were obtained. (b) Representative examples of original fEPSP traces at 400 µA and 500 µA stimulus amplitudes.

Figure 2.
Figure 2. Pipeline for data processing which includes four main steps.

Figure 3.
Figure 3. LSTM architecture used for fEPSP signal prediction in the CA1 region using the CA3 signal as an input.

Figure 4.
Figure 4. Reservoir architecture used for fEPSP signal prediction in the CA1 region using the CA3 signal as an input.

Figure 6.
Figure 6. Boxplots of the main features used in the custom metric for CA3 signals (left panel) and CA1 signals (right panel), obtained as responses to short rectangular electrical pulses of varying amplitude. (a) Rise time, (b) decrease time, (c) halfwidth, (d) amplitude, (e) slope. Black dots are outliers.

Figure 7.
Figure 7. Typical examples of signals belonging to different classes. (a) Signals belonging to Class 1, (b) signals belonging to Class 2, (c) signals belonging to Class 3. See detailed description in the text.

Figure 9.
Figure 9. Evaluation metrics for predicted fEPSP signals. (a) MAPE, (b) custom metric based on valuable properties of the biological signal.

Figure 10.
Figure 10. Comparison of prediction quality for LSTM (left panels, red dots) and RC (right panels, black dots) for different custom metric parameters: (a) rise time, (b) decrease time, (c) halfwidth, (d) amplitude and (e) slope.

Figure A4.
Figure A4. True and predicted fEPSP signals at 500 µA stimulus amplitude. Blue marker corresponds to the true CA1 signal, red marker to the LSTM-predicted signal, black marker to the RC-predicted signal.

Figure A5.
Figure A5. True and predicted fEPSP signals at 600 µA stimulus amplitude. Blue marker corresponds to the true CA1 signal, red marker to the LSTM-predicted signal, black marker to the RC-predicted signal.

Figure A6.
Figure A6. True and predicted fEPSP signals at 700 µA stimulus amplitude. Blue marker corresponds to the true CA1 signal, red marker to the LSTM-predicted signal, black marker to the RC-predicted signal.

Figure A7.
Figure A7. True and predicted fEPSP signals at 800 µA stimulus amplitude. Blue marker corresponds to the true CA1 signal, red marker to the LSTM-predicted signal, black marker to the RC-predicted signal.

Figure A8.
Figure A8. True and predicted fEPSP signals at 900 µA stimulus amplitude. Blue marker corresponds to the true CA1 signal, red marker to the LSTM-predicted signal, black marker to the RC-predicted signal.

Figure A9.
Figure A9. True and predicted fEPSP signals at 1000 µA stimulus amplitude. Blue marker corresponds to the true CA1 signal, red marker to the LSTM-predicted signal, black marker to the RC-predicted signal.