Article

A Novel Robotic Controller Using Neural Engineering Framework-Based Spiking Neural Networks

Electrical Engineering Department, Faculty of Engineering, University of Santiago of Chile (USACH), Av. Víctor Jara 3519, Estación Central, Santiago 9170124, Chile
* Author to whom correspondence should be addressed.
Sensors 2024, 24(2), 491; https://doi.org/10.3390/s24020491
Submission received: 26 December 2023 / Revised: 11 January 2024 / Accepted: 11 January 2024 / Published: 12 January 2024

Abstract:
This paper investigates spiking neural networks (SNN) for novel robotic controllers with the aim of improving accuracy in trajectory tracking. By emulating the operation of the human brain through the incorporation of temporal coding mechanisms, SNN offer greater adaptability and efficiency in information processing, providing significant advantages in the representation of temporal information in robotic arm control compared to conventional neural networks. Exploring specific implementations of SNN in robot control, this study analyzes neuron models and learning mechanisms inherent to SNN. Based on the principles of the Neural Engineering Framework (NEF), a novel spiking PID controller is designed and simulated for a 3-DoF robotic arm using Nengo and MATLAB R2022b. The controller demonstrated good accuracy and efficiency in following designated trajectories, showing minimal deviations, overshoots, or oscillations. A thorough quantitative assessment, utilizing performance metrics like root mean square error (RMSE) and the integral of time-weighted absolute error (ITAE), provides additional validation for the efficacy of the SNN-based controller. Competitive performance was observed, surpassing a fuzzy controller by 5% in terms of the ITAE index and a conventional PID controller by 6% in the ITAE index and 30% in RMSE performance. This work highlights the utility of NEF and SNN in developing effective robotic controllers, laying the groundwork for future research focused on SNN adaptability in dynamic environments and advanced robotic applications.

1. Introduction

The development of the smart factory paradigm, as proposed by Industry 4.0, demands technologies capable of adapting and evolving to meet the growing needs of the industry [1]. To achieve this, increasingly intelligent robotic systems are required, capable of learning new tasks with greater skill and autonomy in decision making. Such systems evolve into self-aware and self-adapting systems for new products and manufacturing processes [2].
One of the main objectives pursued in this context is enabling robots to perceive the environment and act in the real world almost as efficiently and autonomously as humans [3]. This action requires optimizing processing in such a way that robotic systems can integrate a large amount of information, take the corresponding control action in real time, and self-adapt to changing environmental conditions.
These needs are not always easy to fulfill with traditional control strategies, which employ numerical methods based on kinematics and dynamics equations. Typically, these methods are designed for a specific purpose or task and encounter difficulties adapting to scenarios with changing conditions.
On the other hand, one of the alternatives adopted is controllers based on conventional Artificial Neural Networks (ANN). In [4], the authors develop an adaptive neural network control method to achieve effective trajectory tracking for robotic arms. The use of neural networks helps handle uncertainties within the control system and external interference, resulting in improved overall performance, as validated by experiments.
The work presented in [5] proposes a vision-based robot control method that integrates convolutional neural network (CNN) object detection, specifically focusing on robots with an eye-in-hand configuration. The approach employs real-time CNN detectors like Yolo to generate task variables based on bounding box parameters, addressing chattering with an LSTM. The vision-based controller ensures stable object tracking within the camera’s field of view, as verified by experimental results.
A self-tuning PID controller based on ANN is proposed in [6] combining traditional PID control with artificial intelligence for robust performance. The neural network enables dynamic adjustments to PID controller parameters, particularly useful for systems with time delays and transportation lag. However, concerns about the vanishing gradient issue with the sigmoid activation function used in the experiment are highlighted, suggesting the exploration of alternative activation functions for more efficient training.
A Bio-inspired Intelligent Industrial Robot Control System (BIIRCS) is proposed in [7], using deep learning methods for effective control of robots. The system incorporates a bio-inspired neural network to model complex environments and guide a team of robots for coverage tasks, imitating the human brain’s ability to process visual information and adapt to dynamic environments.
In [8], convolutional neural networks are trained on the MobileNetV2, ResNet50, and DenseNet121 architectures, with the addition of a Squeeze-and-Excitation (SE) block, likewise for visual information processing and robotic control. The capsular convolutional neural network (CapsNet) is analyzed in [9] for robotic control, where the authors developed two modifications: 1D-CapsNet and a Windowed Fourier Transform (WFT) 2D-CapsNet.
Despite attempting to mimic the functioning of biological neurons, ANN-based implementations are either slow or consume significant amounts of energy compared to the human brain. ANN are designed to address problems with well-structured data, typically using standard analog representations for neuronal activity. They lack the ability to bridge the gap between biological neuronal encoding and movement coordination, thus foregoing any attempt to establish biological analogies [10].
Controllers inspired by the functioning of biological systems have also been developed, as presented in [11,12], where the optimization of a neuroendocrine PID controller based on the Adaptive Safe Experimentation Dynamics (ASED) method is proposed. The neuroendocrine PID controller is inspired by the ultra-short feedback regulation mechanism of the endocrine system in the human body. It combines a data-driven mechanism, which utilizes runtime data to optimize controller parameters, with a bio-inspired tuning algorithm that guides the optimization of the data-driven controller.
In [13], an intelligent controller called the Brain Emotional Learning-Based Intelligent Controller (BELBIC) is proposed. Inspired by brain emotional learning, the BELBIC is designed to address the lack of knowledge regarding nonlinearity and uncertainties in cable-driven robot models. It is based on a computational model comprising four main subsystems: Amygdala, Orbitofrontal Cortex, Sensory Cortex, and Thalamus. Validated on a cable-driven parallel robot, the BELBIC demonstrates effectiveness in trajectory tracking and maintaining positive cable tensions, eliminating the need for calculating the Jacobian matrix and forward kinematics in the feedback loop.
With recent advances in artificial intelligence, new perspectives have emerged in control strategies, aiming to achieve performance more analogous to the human brain [14]. In biological systems, information is processed using relatively small populations of spikes and their precise synchronization, sufficient for driving learning and behavior.
Therefore, spiking neural networks (SNN), regarded as the third generation of neural networks, offer a promising solution to the control challenges in robotics by closely emulating the operational mechanisms of the brain with enhanced biological accuracy, making them the most realistic models of brain function to date [15].
SNN employ pulse coding mechanisms, which allow them to incorporate spatio–temporal information, enabling accurate time modeling and acquiring information with greater accuracy. In addition, they can encode large amounts of information in the relative timing between spikes, leading to the possibility of faster and more efficient implementations [16].
Due to the use of discrete events in processing, SNN compute a single response across multiple time steps, making them less efficient on standard synchronous computer hardware but potentially more effective on specialized neuromorphic hardware [17]. This specialized hardware, comprising asynchronous and event-driven circuits, guides the design of building blocks for hardware solutions, particularly advantageous for robotic platforms [18].
The most appealing features of these networks for robotic applications are their adaptability, self-learning capabilities, and low power consumption. Additionally, they offer faster processing speed and quick response to external stimuli, enabling greater autonomy and swiftness in robots.
Similar to how neural networks in the brain acquire information through electrical impulses or action potentials, adjusting synaptic strengths or the weight of interconnections based on the duration of these action potentials, SNN operate using discrete time-based events. These events are represented as binary electrical spikes, which convey information throughout the network via spike trains.
The generation or absence of these spikes is determined by the conditions established in the employed neuron model, which typically possesses a set of tunable parameters (transmission delays, synaptic weights, post-spike response and stimulation threshold). The operation of the network depends on these parameters. Several spike-based neuron models are available, and the choice among them is typically influenced by a trade-off between their biological realism and their computational efficiency.
Learning mechanisms, as in traditional ANN, can be categorized into unsupervised learning, supervised learning, and reinforcement learning. Among these, spike-timing-dependent plasticity (STDP) is one of the most commonly used mechanisms, closely related in biology to Hebb’s postulate: “Neurons that fire together, wire together” [19].
Within the scope of this study, the essential characteristics and components of SNN are addressed. A review of the current state of knowledge in this field is conducted, examining the neuron models and learning mechanisms employed in SNN development. Furthermore, various implementations of SNN in robot control applications are analyzed. Subsequently, a novel spiking PID controller is designed and simulated, employing the Neural Engineering Framework (NEF) philosophy. Finally, a comparative evaluation of the results obtained with the designed controller is established in relation to the performance achieved by a conventional PID, a fuzzy controller, and an ANFIS (adaptive neuro-fuzzy inference system) controller.
The major contributions of this work are described below:
  • This research introduces and explores the application of spiking neural networks in robotic controllers, providing valuable insights into their efficacy when compared to conventional controllers;
  • Leveraging NEF principles, this research showcases the design and simulation of a novel spiking PID controller for a 3-DoF robotic arm, highlighting NEF’s adaptability in developing bio-inspired control systems;
  • Competitive performance was obtained, surpassing a fuzzy controller by 5% in terms of the ITAE index and a conventional PID controller by 6% in the ITAE index and 30% in RMSE performance;
  • Emphasizing the substantial potential of SNN within the NEF, this study positions SNN as a promising and versatile choice for advanced robotic applications. This contributes to the domain of industrial robotics through a quantitative analysis, offering a clear understanding of the controller’s practical effectiveness.
The structure of the work comprises Section 2, where the state of the art in the use of SNN for robotic control applications is analyzed. Section 3 covers the fundamentals of SNN, exploring the most commonly used neuron models and learning mechanisms. NEF is examined in Section 4. The development of the controller and the obtained results are addressed in Section 5. Section 6 corresponds to the conclusions.

2. SNN in Robotic Control

The challenge of controlling an n-DOF (degrees of freedom) robotic arm is closely tied to establishing the relationship between the robot’s configuration space and the Cartesian space. This relationship enables the positioning of the end effector through the corresponding joint motions, thereby facilitating the execution of the desired movement.
The inverse robot kinematics problem is usually addressed through numerical optimization, as a given end effector position can be achieved through multiple joint configurations. Recent studies suggest the use of SNN to approximate the inverse kinematic model of the robot and execute the control action.
In [20], the design of a three-layer SNN (encoding, learning, and readout) is examined, which can estimate the kinematic properties of a 4-DoF robotic arm. The network takes as inputs the initial positions of the joints and the displacement of the end effector’s relative position. The learning is achieved using a supervised learning rule.
A similar approach is employed in [21] to solve the inverse kinematics of a 6-DoF robot. In this case, an additional signal is introduced to inhibit model learning once a certain precision threshold is reached. The authors implement online learning, where weight adaptation takes place in real time, which proves advantageous in dynamic and disturbed environments. Furthermore, the authors incorporate a PID controller, also designed using SNN.
When dealing with highly redundant robots and those with numerous degrees of freedom, finding the solution to inverse kinematics becomes an even more complex task, as demonstrated by the works presented in [22,23]. A variant of a recurrent SNN, known as LSNN (Long Short-Term Memory spiking neural network), trainable through error gradients, was introduced in [24,25]. These works demonstrated the competitiveness of LSNN compared to other similar second-generation networks like LSTM (Long Short-Term Memory). In [26], the authors propose the use of this recurrent SNN to learn the kinematic model and exert control over a trunk-type robotic arm. They showcased its capability to effectively control such robots with up to 25-DoF with nearly millimeter precision.
Because communication in SNN occurs through spikes, which are non-differentiable signals, the well-known backpropagation mechanism is not directly applicable for training these networks [27]. For this reason, some studies focus on obtaining an SNN by converting a pre-trained second-generation network. This practice is appealing because it retains the latency and efficiency advantages offered by SNN while the network is trained with established and efficient methodologies. The works presented in [28,29] employ this approach to perform real-time image classification more efficiently by running the network on specialized neuromorphic hardware, despite a slight decrease in accuracy during the conversion process.
When developing control systems based on SNN, some authors have argued that it has not been formally proven that these third-generation networks offer a substantial improvement in accuracy compared to their predecessors. However, their superiority in hardware implementations in terms of energy efficiency has been demonstrated. For some researchers, this reason alone makes the study of the applicability of SNN in control tasks of significant importance [20].
Indeed, the processing and information representation in SNN are fundamentally different from those employed by conventional neural networks. Consequently, trying to use the same learning mechanisms not only fails to harness the full potential of SNN but might also limit the results to being merely comparable to traditional networks rather than surpassing them. For instance, many gradient descent algorithms, such as mapping-based approaches, often fail to adequately account for the temporal aspect of calculations in SNN [30].
For decades, the true potential of these networks has not been realized in practice. However, recent advancements suggest a shift in this trend, with promising progress in neuromorphic hardware and methodologies [26]. New strategies have emerged to tackle the SNN training challenge, including evolutionary algorithms, Liquid State Machines (LSM), and the neuroscience-inspired STDP rule. These approaches unlock a realm of possibilities for deploying and refining SNN architectures by harnessing their intrinsic characteristics.
An example of the advantages that the use of SNN can provide is in torque control. Torque control is concerned with the dynamics of a robot, which encompasses its changes over time. As a result, the utilization of temporal encoding is well suited for capturing the progression of sensorimotor signals and for effectively controlling motion. This makes them an interesting solution for adaptive robot control [10].

3. SNN Fundamentals

In the brain, each neuron connects with around 10,000 others, processes information continuously, and consumes minimal energy despite the enormous number of neurons involved; the network as a whole self-organizes and reconfigures over time. Replicating this behavior in an artificial system is a highly complex endeavor, even when employing simplifications that capture only a fraction of the biological richness.
Over time, models and algorithms have been developed to, in some way, attempt to mimic this neuronal behavior. These developments have evolved in conjunction with advancements in computing resources and neuroscience discoveries.
The first generation of neural networks dealt exclusively with binary signals, using a threshold function as the activation function. Despite this simplicity, such networks proved capable of computing Boolean functions and found extensive application as tools for digital information processing.
In the second generation, nonlinear and continuous activation functions began to be used, allowing for the representation of analog values, typically scaled to a small numerical range. These neurons are more powerful than their predecessors because they can perform the same function but with significantly fewer neurons. Due to the demonstrated ability of these networks to approximate analog functions arbitrarily well, they have been widely used in machine learning applications.
The third generation enables the use of temporal information in communication and computation, bringing it closer to the functioning of biological neurons. Space–time information is captured by the SNN, encoded in spikes. The transmission of event timing with remarkable precision and accuracy is a notable attribute of these networks’ spikes. SNN offer distinct advantages and heightened biological plausibility, making them well suited for applications in robotics. They also contribute to the development of improved tools for analyzing brain functions [31].

3.1. Biological Neurons

In the human nervous system, around 86 billion neurons serve as fundamental operational units [32], transmitting electrochemical signals known as action potentials [33]. Neurons have a complex structure, shown in Figure 1a, including the soma (cell body) for information processing, dendrites for receiving impulses, and axons for transmitting impulses. The neuron’s membrane potential increases through dendritic input, leading to an action potential or spike [16]. Following a spike, there is a refractory period, preventing immediate firing. The generated spike travels through the axon to other neurons.
Action potentials arriving at axon terminals initiate the release of neurotransmitters into the synaptic cleft, binding to receptors on the postsynaptic neuron’s membrane and altering its potential. Neurotransmitters can have excitatory or inhibitory effects, facilitating information transmission. This neuronal connection, known as a synapse, is a fundamental and intricate element of neural function. This phenomenon is illustrated in Figure 1b.
While the exact mechanism by which biological neural systems respond to external stimuli remains uncertain, it is widely accepted that synapses either strengthen or weaken, adapting the response to stimuli based on the outcome they achieve. In this process of strengthening and weakening, synapses, in turn, induce the inhibition of certain neurons in favor of others, ensuring the response is as closely aligned to the received stimuli as possible.

3.2. Neuron Models

Because the dynamics of living neuronal cells arise from complex exchanges of ions, reproducing them precisely requires computationally intensive models, which makes exact implementation highly challenging. Nevertheless, the fundamental elements of their operation, such as dynamics, propagation, and plasticity, can be successfully modeled using mathematical descriptions.
Various models of neurons with different levels of complexity and biological plausibility have been proposed. Typically, the selection of a specific model involves a trade-off between these two elements, depending on the intended application. In [35], a review of spike neuron models is conducted, taking these aspects into account.
The Hodgkin–Huxley model [36,37] is a classical spiking neuron model that successfully captures the detailed dynamics of ion channels in real neurons. A comprehensive computational model of a biological neuron, like the Hodgkin–Huxley model, is impractical for simulation development due to its strong computational demands. To address this challenge, various more efficient models have been developed [38]. These models are simpler but can still capture the essence of the dynamics of real neurons. Some of these models, along with their key characteristics, are presented in Table 1.
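As an illustration of this trade-off, the leaky integrate-and-fire (LIF) model, one of the most widely used simplified neuron models, can be simulated in a few lines. The following Python sketch is illustrative only; the time constants, threshold, and input current are arbitrary choices, not values used in this paper.

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau_m=0.02, v_th=1.0, v_reset=0.0, t_ref=0.002):
    """Simulate a leaky integrate-and-fire neuron driven by an input current trace.

    Returns the membrane-potential trace and the indices of emitted spikes.
    All parameter values are illustrative, not taken from the paper.
    """
    v = v_reset
    refractory = 0.0
    v_trace, spikes = [], []
    for i, I in enumerate(current):
        if refractory > 0:
            refractory -= dt              # absolute refractory period after a spike
        else:
            v += dt / tau_m * (-v + I)    # leaky integration of the input current
            if v >= v_th:                 # threshold crossing -> emit a spike
                spikes.append(i)
                v = v_reset
                refractory = t_ref
        v_trace.append(v)
    return np.array(v_trace), spikes

# A constant supra-threshold current produces a regular spike train.
I = np.full(10000, 1.5)                   # 1 s of input at dt = 0.1 ms
v, spikes = simulate_lif(I)
```

The LIF neuron discards the ion-channel detail of Hodgkin–Huxley but keeps the behavior that matters for SNN simulation: integration, thresholding, reset, and a refractory period.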

3.3. Learning Rules

Approaches to training SNN can be broadly categorized into three groups: supervised learning, which involves techniques like gradient descent and spike backpropagation; unsupervised learning, employing local synaptic learning rules, such as spike-timing-dependent plasticity; and reinforcement learning, which relies on reward or error signals and uses reward-modulated plasticity for training [33].
The first supervised learning algorithm for spiking neurons, SpikeProp, utilizes backpropagation and gradient descent [43]. SpikeProp formulates an error function based on the variance between the intended and observed output spike times, constructed using the least squares error method [44]. Another approach is the SuperSpike algorithm [45], a nonlinear voltage-based learning rule employing gradient descent grounded in the membrane potential of neurons [46]. Tempotron, an online supervised learning rule, is designed for binary classification of multi-neuronal spike patterns. It relies on gradients and adapts synaptic weights to fire appropriately in response to spike patterns from various categories [47], representing a biologically plausible mechanism [48].
Biological systems constantly adapt, learn from their mistakes, and recognize patterns they have never seen before. This type of learning is not naturally captured by backpropagation techniques. The backpropagation algorithm is effective when dealing with many inputs and outputs and when the process occurring between them is not well known. However, when there is a clear idea of what needs to be represented, using that information can be beneficial.
Prescribed Error Sensitivity (PES) is an online supervised learning algorithm tailored for real-time adaptive control. This approach improves a function by minimizing an externally provided error signal [49]. It is frequently used in conjunction with the NEF.
Liquid State Machine (LSM) is a widely used algorithm in SNN [30]. In this paradigm, a sparsely connected recurrent SNN functions as a dynamic “liquid” or “reservoir.” The reservoir, constructed stochastically, maintains input distinctiveness and transient memory. A readout component, often employing linear regression, interprets the reservoir’s output.
Evolutionary approaches have also been used to train or design SNN [30]. These approaches are advantageous due to their independence from differentiable activation functions and network configurations. They provide flexibility for adapting and modifying network features, but this flexibility may result in slower convergence compared to alternative training methods.
Spike-timing-dependent plasticity (STDP) is an unsupervised Hebbian learning method that adjusts the connection strength between neurons by considering the relative spike timing [34]. When the presynaptic spike arrives before a postsynaptic spike, the synaptic weight increases; this phenomenon is known as Long-Term Potentiation (LTP). Conversely, if the presynaptic spike arrives after the postsynaptic spike, the result is Long-Term Depression (LTD), causing a decrease in synaptic weight.
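A minimal sketch of the pair-based STDP rule just described, assuming illustrative exponential timing windows and learning rates (the constants are not from any particular model in this paper):

```python
import numpy as np

def stdp_weight_change(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-based STDP: weight change as a function of relative spike timing.

    dt > 0 (pre before post) gives potentiation (LTP);
    dt < 0 (pre after post) gives depression (LTD).
    Amplitudes a_plus/a_minus and time constant tau are illustrative.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # LTP, decaying with the time gap
    elif dt < 0:
        return -a_minus * np.exp(dt / tau)   # LTD, decaying with the time gap
    return 0.0

# Pre 5 ms before post strengthens the synapse; the reverse weakens it.
ltp = stdp_weight_change(t_pre=0.000, t_post=0.005)
ltd = stdp_weight_change(t_pre=0.005, t_post=0.000)
```

The exponential windows capture the biological observation that tightly correlated spike pairs change the synapse more than loosely correlated ones.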

4. Neural Engineering Framework

The Neural Engineering Framework (NEF) is a comprehensive methodology for developing large-scale, biologically plausible cognitive models [50]. It ensures a globally optimal approximation of dynamic equations, balancing high-level abstraction with preservation of fundamental behavioral aspects. Unlike frameworks for learning from input–output data, the NEF constructs a spiking neural network with a known transform through an optimization procedure [51].
The NEF translates neural activity into a vector space representation, implementing ordinary differential equations (ODE) [52]. The capability to implement ODE makes it suitable for control theory algorithms, with real-time adjustment of connection weights.
The NEF handles recurrent connections for complex dynamical systems and integrates error-based learning rules for online adaptation [53]. Biological constraints are incorporated by defining specific neuronal characteristics, allowing the evaluation of algorithm feasibility by comparing them with data at various levels.
The NEF consists of three fundamental principles: representation, transformation, and dynamics.
The principle of representation outlines how the NEF represents information using patterns of neuronal activity (in the form of time-varying vectors of real numbers) in a neural ensemble through the combination of nonlinear encoding and weighted linear decoding.
Following this principle, input signals are encoded into populations of neurons using specific tuning curves, which describe the activation of each neuron in response to the input signal. Each neuron $i$ in the ensemble has an encoding vector (encoder) $e_i$, interpreted as the preferred direction vector of the neuron, meaning the vector for which the neuron fires most intensely.
Starting from the premise that the NEF establishes the input current to a neuron as a linear function of the represented value, neuronal activity for an input value $x$ is calculated using Equation (1), where $G$ is the neuronal nonlinearity (dependent on the chosen neuron model), $\alpha_i$ is a gain parameter, and $I_i^{bias}$ is the constant background current for the neuron.

$$a_i = G_i\left(\alpha_i e_i \cdot x + I_i^{bias}\right) \quad (1)$$
Subsequently, applying the same principle, the spike activity generated in an ensemble is decoded to obtain the represented value. The estimation of this value relies on the postsynaptic activity generated after receiving a spike through the synapse. This activity is obtained by applying a linear filter to the spike train according to Equation (2), where $a_i$ is the activity of neuron $i$, $h_i$ is the spike response function (typically a decaying exponential weighted by a time constant), $*$ is the convolution operator, and $\delta$ is the spike train generated in response to the input signal $x$, with spike times indexed by $j$.

$$a_i(x) = h_i(t) * \sum_j \delta\left(t - t_j(x)\right) \quad (2)$$
This enables the definition of the decoding operation as a linear sum of the neuronal activities of that population, according to Equation (3), where $\hat{x}$ represents the estimation of the original input signal, $N$ is the number of neurons in the population, and $d_i$ is the decoder. Least squares minimization is the most commonly used method to determine the set of decoding weights $d_i$, as it is necessary to minimize the difference between the represented value $x$ and its estimation.

$$\hat{x} = \sum_{i}^{N} a_i(x)\, d_i \quad (3)$$
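The representation principle of Equations (1)–(3) can be sketched in a few lines of Python. This is a simplified rate-mode illustration: a rectified-linear nonlinearity stands in for the (model-dependent) spiking nonlinearity $G$, and all parameter distributions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50

# Random gains, bias currents and preferred directions for a 1-D ensemble.
alpha = rng.uniform(0.5, 2.0, n_neurons)   # gains alpha_i
bias = rng.uniform(-1.0, 1.0, n_neurons)   # background currents I_i^bias
enc = rng.choice([-1.0, 1.0], n_neurons)   # encoders e_i (just +/-1 in 1-D)

def rates(x):
    """Equation (1), with a rectified-linear nonlinearity standing in for G."""
    j = alpha * enc * x[:, None] + bias    # input current per neuron
    return np.maximum(j, 0.0)              # G(J): firing rate

# Solve for decoders d_i by least squares over sample points (Equation (3)).
xs = np.linspace(-1, 1, 200)
A = rates(xs)                              # activity matrix, shape (200, 50)
d, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_hat = rates(np.array([0.3])) @ d         # decode an example value
```

The decoded estimate `x_hat` stays close to the encoded value 0.3, showing how nonlinear encoding plus weighted linear decoding recovers the represented quantity.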
The principle of transformation explains how operations and transformations between neural ensembles are implemented to construct the neural network and execute functions through their connections.
When a neuron produces a spike, it releases a neurotransmitter through the synapse, typically resulting in the transmission of a certain amount of current to the postsynaptic neuron. Many factors influence the amplitude of this current, and in the NEF, these factors are encapsulated in a scalar connection weight representing the strength of the connection between two neurons.
According to this principle, the connection weights between ensembles are computed as the product of the decoding weights for the function in the first ensemble $d_i$, the encoding weights for the second ensemble $e_j$, and some linear transformation, as defined in Equation (4). This principle also allows for the addition of values simply by introducing two inputs into the same group of neurons.

$$\omega_{ij} = f\left(d_i \cdot e_j\right) \quad (4)$$
The principle of dynamics establishes that neural representations can be viewed as state variables in a dynamic system (linear or nonlinear). These dynamic systems are built through recurrent connections, which can be computed using the second principle.
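A practical consequence of the transformation principle is that the full weight matrix never needs to be stored: the factored form (decoders, encoders, gains) produces the same postsynaptic input currents. A small numpy sketch with illustrative values, taking the linear transformation as the identity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative decoders of ensemble A (for the identity function) and
# gains/encoders of ensemble B, the factors appearing in Equation (4).
d = rng.normal(0, 0.1, 40)           # decoders d_i of ensemble A
alpha_b = rng.uniform(0.5, 2.0, 30)  # gains of ensemble B
enc_b = rng.choice([-1.0, 1.0], 30)  # encoders e_j of ensemble B

# Full connection weight matrix: omega_ji = alpha_j * e_j * d_i.
W = np.outer(alpha_b * enc_b, d)

# Factored and full-weight computations of B's input currents agree.
a = rng.uniform(0, 50, 40)           # some activity of ensemble A
x_hat = a @ d                        # decoded value (Equation (3))
j_factored = alpha_b * enc_b * x_hat # decode, then re-encode
j_full = W @ a                       # multiply by the full weight matrix
```

Because decoding then re-encoding is algebraically identical to multiplying by the outer-product weight matrix, a simulator can keep only the low-rank factors, which is the source of the memory and speed advantage mentioned later for Nengo.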

Nengo

Nengo Brain Maker is a powerful neural simulator based on the NEF, designed for building large-scale models of interconnected neural systems. Utilizing neural ensembles to represent information, Nengo acts as a “neural compiler,” translating high-level functional models into detailed low-level neural networks [54]. The Python package comprises essential objects like Ensemble, Node, Connection, Probe, and Network, while NengoGUI, an interactive web tool, aids in model visualization.
This tool provides versatility in defining parameters within neuron ensembles, allowing adjustments to firing rates, representational radius, and intercepts. It enables the incorporation of synaptic time constants, anatomical limitations, and alignment with neuroscientific evidence in the model design [55]. Nengo supports various biologically plausible learning rules for connection weight adjustments, with the PES rule being commonly used.
In comparison to other neural simulators, Nengo stands out for its theoretical framework with high biological realism. It played a crucial role in developing Spaun 2.0, the largest functional brain model, making Nengo unique in its class. Notably, Nengo gains a speed and memory advantage by storing only the factored components of the connection weight matrix (decoders and encoders) rather than the full matrix, ensuring flexibility, scalability, and robustness.

5. Spiking PID Controller

In this section, we present the development of a PID (Proportional–Integral–Derivative) controller built from spiking neurons under the NEF philosophy. The structure and operation of the controller are described, as well as the simulation environment used. The controller's behavior is then analyzed through the trajectory-tracking graphs and performance indices obtained.

5.1. Description of the System under Study

For this study, the subject of control is a 3-DoF robotic arm, characterized by the Denavit–Hartenberg parameters detailed in Table 2, with d_1 = 0.352 m, a_2 = 0.36 m, and a_3 = 0.445 m. The dynamic model of the robotic arm was derived employing the Lagrange–Euler formulation, as specified in Equation (5).
τ = M(q)q̈ + C(q, q̇)q̇ + G(q) + F(q̇)
where M(q) stands for the inertia matrix, C(q, q̇) is the matrix of Coriolis and centrifugal terms, G(q) is the vector of gravitational torques acting on the robot, and F(q̇) is the vector of frictional forces. τ denotes the vector of generalized forces, and q, q̇, and q̈ represent the joint positions, velocities, and accelerations, respectively. The dynamic model of the 3-DoF manipulator is expressed by Equations (6) through (25), with the corresponding dynamical parameter values detailed in Table 3. The mathematical model of the robot was adopted from [56].
M(q) = [m11 m12 m13; m21 m22 m23; m31 m32 m33]
m11 = (m3 lc3² + Iy3) c3² + 2 a2 m3 lc3 c2 c3 + (m2 lc2² + a2² m3 + Iy2) c2²
m12 = m13 = m21 = m31 = 0
m22 = 2 a2 m3 lc3 c23 + m3 lc3² + m2 lc2² + a2² m3 + Iz3 + Iz2
m23 = m32 = a2 m3 lc3 c23 + m3 lc3² + Iz3
m33 = m3 lc3² + Iz3
Iy2 = Iz2 = m2 a2²/12
Iy3 = Iz3 = m3 a3²/12
C(q, q̇) = (1/2) [C11 C12 C13; C21 C22 C23; C31 C32 C33]
C11 = −{m2 lc2² sin 2θ2 + m3 [a2² sin 2θ2 + lc3² sin(2θ2 + 2θ3) + a2 lc3 sin(2θ2 + θ3)]} θ̇2 − {m3 lc3 [lc3 sin(2θ2 + 2θ3) + 2 a2 sin(2θ2 + θ3) + 2 a2 sin θ3]} θ̇3
C12 = −{m2 lc2² sin 2θ2 + m3 [a2² sin 2θ2 + lc3² sin(2θ2 + 2θ3) + a2 lc3 sin(2θ2 + θ3)]} θ̇1
C13 = −{m3 lc3 [lc3 sin(2θ2 + 2θ3) + 2 a2 sin(2θ2 + θ3) + 2 a2 sin θ3]} θ̇1
C21 = {m2 lc2² sin 2θ2 + m3 [a2² sin 2θ2 + lc3² sin(2θ2 + 2θ3) + a2 lc3 sin(2θ2 + θ3)]} θ̇1
C22 = −2 a2 m3 lc3 s23 θ̇3
C23 = −2 a2 m3 lc3 s23 (θ̇2 + θ̇3)
C31 = {m3 lc3 [lc3 sin(2θ2 + 2θ3) + 2 a2 sin(2θ2 + θ3) + 2 a2 sin θ3]} θ̇1
C32 = 2 a2 m3 lc3 s23 θ̇2
C33 = 0
G(q) = [0; (m2 lc2 c2 + m3 (a2 c2 + lc3 c23)) g; m3 lc3 c23 g]
F(q̇) = [Fv(q̇1); Fv(q̇2); Fv(q̇3)] = [b1 θ̇1; b2 θ̇2; b3 θ̇3]
where ci = cos θi, c23 = cos(θ2 + θ3), and s23 = sin(θ2 + θ3).
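As a numerical sanity check, the entries of M(q) above can be evaluated with the parameters of Table 3. The sketch below assumes the shorthand c2 = cos θ2, c3 = cos θ3, and c23 = cos(θ2 + θ3); it is an illustration rather than the authors' code:

```python
import numpy as np

# link parameters from Table 3 (m1 does not appear in M(q))
m2, m3 = 0.3, 0.3                  # [kg]
a2, a3 = 0.36, 0.445               # [m]
lc2, lc3 = 0.18, 0.22              # [m]
Iy2 = Iz2 = m2 * a2**2 / 12
Iy3 = Iz3 = m3 * a3**2 / 12

def inertia(q):
    """Evaluate the inertia matrix entries of the 3-DoF arm at joint angles q."""
    _, t2, t3 = q
    c2, c3, c23 = np.cos(t2), np.cos(t3), np.cos(t2 + t3)
    m11 = ((m3 * lc3**2 + Iy3) * c3**2
           + 2 * a2 * m3 * lc3 * c2 * c3
           + (m2 * lc2**2 + a2**2 * m3 + Iy2) * c2**2)
    m22 = (2 * a2 * m3 * lc3 * c23 + m3 * lc3**2 + m2 * lc2**2
           + a2**2 * m3 + Iz3 + Iz2)
    m23 = a2 * m3 * lc3 * c23 + m3 * lc3**2 + Iz3
    m33 = m3 * lc3**2 + Iz3
    return np.array([[m11, 0.0, 0.0],
                     [0.0, m22, m23],
                     [0.0, m23, m33]])

M = inertia(np.zeros(3))
# a physically valid inertia matrix is symmetric and positive definite
assert np.allclose(M, M.T)
assert np.all(np.linalg.eigvalsh(M) > 0)
```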

5.2. Controller Design

Considering the challenge of controlling the joint positions of a robotic arm with a bio-inspired SNN system under the concepts established by the NEF, and taking advantage of the capabilities offered by Nengo, a simulation environment was developed based on the diagram shown in Figure 2, where q_d represents the desired position and q and dq correspond to the actual positions and velocities of the robot.
The MATLAB/Simulink tool was used to simulate the robot, recreating its kinematic and dynamic characteristics. Additionally, by utilizing MATLAB’s capability to execute Python code, the SNN controller developed using Nengo was integrated.
Given its simplicity and widespread adoption in the industry for robot control, the classic Proportional–Integral–Derivative (PID) controller stands out as the primary choice for implementing industrial control systems.
This controller generates a signal u(t) that guides the robot joint toward the desired position through continuous error reduction. It consists of three components: the current error value, the derivative of the error (which anticipates the future error), and the integral of the error (which accounts for past error values). The control signal is calculated by combining these components according to Equation (26), where k_p, k_d, and k_i represent the proportional, derivative, and integral gains of the controller.
u(t) = k_p e(t) + k_d de(t)/dt + k_i ∫₀ᵗ e(τ) dτ
e(t) = q_d − q
This work proposes simulating PID control using spiking neurons for the calculation of its three components. The configuration of the spiking PID controller simulation in Nengo is illustrated in Figure 3.
Through Nengo nodes, the controller receives the values of q d , q , and d q as input and obtains d q d , corresponding to the desired velocity, by deriving q d through two connections (synapses) with different time scales. These values are encoded to neural activity in the ensembles P , D , and I according to Equation (1), where the error, its derivative, and its integral are obtained, respectively. Subsequently, using Equation (4), in the connection to the u ensemble, a linear transformation is applied to these values, multiplying them by their respective gains k p , k d , and k i . Finally, the u ensemble performs a weighted sum of the three branches of the controller according to Equation (26) and outputs the appropriate torque signal for the robot’s movement.
For each set of input values q_d, q, and dq, the ensemble u generates a pattern of neuronal activity, that is, a spike pattern, as shown in Figure 4. This signal is generated in a Python function executed from Simulink through MATLAB's coder.extrinsic mechanism. To save the signal generated in the ensemble and use it as a torque signal to control the robot, a Nengo object called Probe is employed (a probe is an object that collects data from the simulation).
The values represented by the neuronal activity of ensemble u are automatically calculated by Nengo according to Equation (3) and saved in a nengo.Probe. For each input set, the created probe saves the values of u for 1 s in the tau variable; these values are then averaged and returned to MATLAB to be applied as torque to the robot.
In this implementation, 300 LIF-type neurons were used for each ensemble, and weight optimization between neural representations was performed during model construction. The Nengo implementation of the spiking PID controller is illustrated in Figure 5.
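The encoding and decoding that Nengo performs for each ensemble (Equations (1)–(3)) can be reproduced in a rate-based numpy sketch with 300 LIF neurons; the tuning-curve parameters below are illustrative defaults, not the ones Nengo draws:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300                                  # LIF neurons, as in each controller ensemble
tau_ref, tau_rc = 0.002, 0.02            # refractory period and membrane time constant

# random tuning curves: unit encoders (+/-1 in 1-D), intercepts, maximum rates
enc = rng.choice([-1.0, 1.0], size=n)
intercepts = rng.uniform(-0.9, 0.9, size=n)
max_rates = rng.uniform(200, 400, size=n)

# gain/bias so each neuron starts firing at its intercept and hits max_rate at enc*x = 1
J_max = 1.0 / (1.0 - np.exp((tau_ref - 1.0 / max_rates) / tau_rc))
gain = (J_max - 1.0) / (1.0 - intercepts)
bias = 1.0 - gain * intercepts

def rates(xs):
    """LIF rate encoding: a_i(x) = G[gain_i * enc_i * x + bias_i]."""
    J = gain * (np.atleast_1d(xs)[:, None] * enc) + bias
    out = np.zeros_like(J)
    m = J > 1.0
    out[m] = 1.0 / (tau_ref - tau_rc * np.log1p(-1.0 / J[m]))
    return out

# linear decoding: regularized least squares over sample points
xs = np.linspace(-1, 1, 200)
A = rates(xs)                            # (samples, neurons)
sigma = 0.1 * A.max()                    # illustrative regularization level
d = np.linalg.solve(A.T @ A + len(xs) * sigma**2 * np.eye(n), A.T @ xs)

x_test = np.linspace(-1, 1, 101)
x_hat = rates(x_test) @ d
rmse = np.sqrt(np.mean((x_hat - x_test) ** 2))
assert rmse < 0.1                        # the ensemble represents x accurately
```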
All the neural operations and transformations for the development of the spiking PID controller are automatically implemented by the Nengo simulator. Hence, the primary advantage of this tool is that it allows one to describe, at a high level of abstraction, the function to be realized. Simply specifying the function you want to compute will result in an SNN that performs that function. Therefore, implementing this controller using these principles is relatively straightforward because the functions to be developed are well known.
A notable aspect of working with these tools is that, regardless of the structure you aim to build with SNN, even if it involves mapping specific brain regions such as the motor cortex or the cerebellum, what is ultimately executed is a collection of simulated individual neurons, each sending spikes to other neurons through synapses with specific weights, just as occurs in the actual brain.

5.3. Results and Discussions

To evaluate the performance of the designed spiking PID controller, a circular trajectory was used with sinusoidal position and velocity profiles. The employed trajectory is defined by Equation (28) and is depicted in Figure 6.
x = 0.352
y = 0.15 sin(2t)
z = 0.15 cos(2t) + 0.4
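Sampling Equation (28) confirms that the reference points lie on a circle of radius 0.15 m centered at (0.352, 0, 0.4) in the x = 0.352 plane:

```python
import numpy as np

t = np.linspace(0.0, np.pi, 500)      # one full period of sin(2t)/cos(2t)
x = np.full_like(t, 0.352)
y = 0.15 * np.sin(2 * t)
z = 0.15 * np.cos(2 * t) + 0.4

# every sample is 0.15 m from the circle's center (0.352, 0, 0.4)
r = np.sqrt(y**2 + (z - 0.4)**2)
assert np.allclose(r, 0.15)
assert np.allclose(x, 0.352)
```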
The trajectory tracking graphs, depicted in Figure 7 for the Cartesian trajectory and Figure 8 for the joint trajectory, illustrate the controller’s effectiveness. The robot consistently maintains accurate tracking of the reference trajectory, demonstrating smooth performance without overshoots or abrupt deviations.
The torque signal obtained from the controller is shown in Figure 9, where a smooth variation can be observed, without abrupt changes and without the presence of signal saturations.
Figure 10 and Figure 11 display the errors obtained during the simulation for the Cartesian and joint trajectories, offering a visual representation of the system’s performance in terms of tracking accuracy. It can be observed that, although further enhancements are possible, the errors obtained are minimal. Therefore, it can be stated that a satisfactory performance was achieved with this controller, reaching a high level of accuracy in reference tracking.
The spiking PID controller's performance was evaluated through two performance indices: the root mean square error (RMSE), which assesses the tracking accuracy of the control model, and the integral of time-weighted absolute error (ITAE), which penalizes errors that persist over time and therefore emphasizes how quickly the controller corrects its response. Equations (29) and (30) detail the expressions for calculating these indices, providing a comprehensive assessment of the controller's effectiveness.
RMSE = √( (1/n) Σᵢ₌₁ⁿ eᵢ² )          [rad]
ITAE = ∫₀ᵗ t |e(t)| dt          [rad·s²]
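Both indices are straightforward to compute from a sampled error signal. For a constant error c over [0, T], RMSE = c and ITAE = c·T²/2, which the sketch below uses as a self-check (the sampling step is illustrative):

```python
import numpy as np

def rmse(e):
    """Root mean square error of a sampled error signal."""
    return np.sqrt(np.mean(np.asarray(e) ** 2))

def itae(e, dt):
    """Rectangle-rule approximation of the integral of t*|e(t)|."""
    e = np.abs(np.asarray(e))
    t = np.arange(len(e)) * dt
    return np.sum(t * e) * dt

dt, T, c = 0.001, 2.0, 0.05
e = np.full(int(T / dt), c)         # constant error of 0.05 rad for 2 s
assert abs(rmse(e) - c) < 1e-12     # RMSE of a constant signal is the constant
assert abs(itae(e, dt) - c * T**2 / 2) < 1e-3
```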
For a quantitative performance analysis, the metrics obtained with the spiking PID controller were compared with those of a conventional PID controller using identical gains. Additionally, comparisons were made with a previously developed fuzzy controller and an ANFIS (adaptive neuro-fuzzy inference system) controller designed for this plant, as detailed in [56]. The evaluation encompassed both Cartesian and joint trajectories, and the corresponding metrics are presented in Table 4 and Table 5.
In general, the obtained indices are good and quite close to those achieved by more traditional controllers. This conclusion is supported by the graphical representation in Figure 12, allowing for a comparative analysis of the controllers’ response through the tracking curves of each robot joint.
The spiking PID controller demonstrates competitive performance with good accuracy and rapid error correction. Analysis of Table 4 and Table 5 shows that it outperforms the fuzzy controller by 5% in terms of the ITAE index.
The designed spiking PID controller shows improvements of 6% in the ITAE index and 30% in RMSE performance compared to the conventional PID controller, despite being designed with the same gains. This improvement is attributed to the spiking PID controller’s ability to maintain a more faithful representation of temporal patterns and system dynamics, leveraging the biological similarity of neurons and synaptic connections.
The approach used in Nengo enables advanced spatio–temporal processing, capturing complex relationships between input and output signals. The distributed representation of information in SNN facilitates efficient signal encoding, resulting in increased adaptability and precision in generating control signals.
While the spiking PID controller does not surpass the performance of the ANFIS controller, the result is considered satisfactory, considering that it is in an early stage of research. Additionally, this design incorporates some improvements inherent to SNN but retains certain intrinsic characteristics of the PID controller, limiting its adaptability to complex nonlinear systems, especially those with dynamic and variable behaviors. The manual tuning of PID gains, although crucial, may not be sufficient to address all operational conditions. Furthermore, the complexity of the robot and assigned tasks can significantly influence the relative performance of controllers. The ANFIS, being an adaptive neuro-fuzzy inference system, proves to be more adept at modeling and adapting to the specific nonlinearities of the system.

6. Conclusions

Spiking neural networks (SNN) play a significant role in advancing robotics, particularly in systems with higher degrees of freedom and industrial applications, showcasing a high potential to enhance autonomy and adaptability in dynamic environments. The utilization of the Neural Engineering Framework (NEF) and Nengo provides a powerful, robust, and adaptable approach for SNN-based controllers, as demonstrated by the implementation of a novel spiking Proportional–Integral–Derivative (PID) controller for a robotic arm. The competitive performance achieved by the developed controller is evident through trajectory tracking and error curve analyses, as well as performance indices such as RMSE and ITAE.
This research, representing the initial phase of an ongoing investigation, shows promising results for improved precision compared to conventional controllers. Future research aims to advance SNN-based controllers, focusing on adaptability, precision, and overall performance. The consideration of robotic systems with multiple degrees of freedom constitutes a key aspect of this strategic focus. Furthermore, potential hardware implementations of SNN controllers, utilizing FPGA or specialized neuromorphic hardware, are contemplated for future studies to demonstrate their energy-efficient characteristics and high performance in executing control models for real-time industrial applications.
In light of these findings, this study contributes to the growing body of knowledge on the applicability of SNN in robotics, paving the way for future advancements and practical implementations in industrial and complex robotic scenarios.

Author Contributions

Conceptualization, J.K. and D.M.; methodology, J.K. and D.M.; software, J.K. and D.M.; validation, J.K. and D.M.; formal analysis, J.K. and D.M.; investigation, J.K. and D.M.; resources, J.K. and D.M.; data curation, J.K. and D.M.; writing—original draft preparation, J.K. and D.M.; writing—review and editing, J.K. and D.M.; visualization, J.K. and D.M.; supervision, J.K. and C.U.; project administration, J.K. and C.U.; funding acquisition, J.K. and C.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All research data is included in the paper.

Acknowledgments

This work has been supported by Agencia Nacional de Investigación y Desarrollo ANID, Chile, through IDeA I + D ID21I10087 project and by ANID-Subdirección de Capital Humano/Doctorado Nacional/2021-21210149. It also received support from the Vicerrectoría de Investigación, Innovación y Creación of the University of Santiago of Chile (USACH), Chile.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kalsoom, T.; Ramzan, N.; Ahmed, S.; Ur-Rehman, M. Advances in Sensor Technologies in the Era of Smart Factory and Industry 4.0. Sensors 2020, 20, 6783. [Google Scholar] [CrossRef]
  2. Indri, M.; Grau, A.; Ruderman, M. Guest Editorial Special Section on Recent Trends and Developments in Industry 4.0 Motivated Robotic Solutions. IEEE Trans. Ind. Inform. 2018, 14, 1677–1680. [Google Scholar] [CrossRef]
  3. Jones, A.; Gandhi, V.; Mahiddine, A.Y.; Huyck, C. Bridging Neuroscience and Robotics: Spiking Neural Networks in Action. Sensors 2023, 23, 8880. [Google Scholar] [CrossRef] [PubMed]
  4. Xu, K.; Wang, Z. The Design of a Neural Network-Based Adaptive Control Method for Robotic Arm Trajectory Tracking. Neural Comput. Appl. 2023, 35, 8785–8795. [Google Scholar] [CrossRef]
  5. Guo, J.; Nguyen, H.-T.; Liu, C.; Cheah, C.C. Convolutional Neural Network-Based Robot Control for an Eye-in-Hand Camera. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 4764–4775. [Google Scholar] [CrossRef]
  6. Xu, P. Neural Network Based Self-Tuning PID Controller. In Proceedings of the 2022 2nd International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI), Guangzhou, China, 21–23 October 2022; pp. 655–661. [Google Scholar]
  7. Guan, J.; Su, Y.; Su, L.; Sivaparthipan, C.B.; Muthu, B. Bio-Inspired Algorithms for Industrial Robot Control Using Deep Learning Methods. Sustain. Energy Technol. Assess. 2021, 47, 101473. [Google Scholar] [CrossRef]
  8. Tsapin, D.; Pitelinskiy, K.; Suvorov, S.; Osipov, A.; Pleshakova, E.; Gataullin, S. Machine Learning Methods for the Industrial Robotic Systems Security. J. Comput. Virol. Hacking Tech. 2023. [Google Scholar] [CrossRef]
  9. Osipov, A.; Pleshakova, E.; Bykov, A.; Kuzichkin, O.; Surzhik, D.; Suvorov, S.; Gataullin, S. Machine Learning Methods Based on Geophysical Monitoring Data in Low Time Delay Mode for Drilling Optimization. IEEE Access 2023, 11, 60349–60364. [Google Scholar] [CrossRef]
  10. Abadía, I.; Naveros, F.; Garrido, J.A.; Ros, E.; Luque, N.R. On Robot Compliance: A Cerebellar Control Approach. IEEE Trans. Cybern. 2021, 51, 2476–2489. [Google Scholar] [CrossRef]
  11. Ghazali, M.R.; Ahmad, M.A.; Jusof, M.F.M.; Ismail, R.M.T.R. A Data-Driven Neuroendocrine-PID Controller for Underactuated Systems Based on Safe Experimentation Dynamics. In Proceedings of the 2018 IEEE 14th International Colloquium on Signal Processing & Its Applications (CSPA), Penang, Malaysia, 9–10 March 2018; pp. 61–66. [Google Scholar]
  12. Bin Ghazali, M.R.; bin Ahmad, M.A.; bin Raja Ismail, R.M.T. Adaptive Safe Experimentation Dynamics for Data-Driven Neuroendocrine-PID Control of MIMO Systems. IETE J. Res. 2022, 68, 1611–1624. [Google Scholar] [CrossRef]
  13. Bajelani, M.; Ahmad Khalilpour, S.; Isaac Hosseini, M.; Taghirad, H.D.; Cardou, P. Brain Emotional Learning Based Intelligent Controller for a Cable-Driven Parallel Robot. In Proceedings of the 2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 17–19 November 2021; pp. 37–42. [Google Scholar]
  14. Arents, J.; Greitans, M. Smart Industrial Robot Control Trends, Challenges and Opportunities within Manufacturing. Appl. Sci. 2022, 12, 937. [Google Scholar] [CrossRef]
  15. Macdonald, F.L.A.; Lepora, N.F.; Conradt, J.; Ward-Cherrier, B. Neuromorphic Tactile Edge Orientation Classification in an Unsupervised Spiking Neural Network. Sensors 2022, 22, 6998. [Google Scholar] [CrossRef]
  16. Bing, Z.; Meschede, C.; Röhrbein, F.; Huang, K.; Knoll, A.C. A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks. Front. Neurorobot. 2018, 12, 35. [Google Scholar] [CrossRef]
  17. Pietrzak, P.; Szczęsny, S.; Huderek, D.; Przyborowski, Ł. Overview of Spiking Neural Network Learning Approaches and Their Computational Complexities. Sensors 2023, 23, 3037. [Google Scholar] [CrossRef] [PubMed]
  18. Juárez-Lora, A.; García-Sebastián, L.M.; Ponce-Ponce, V.H.; Rubio-Espino, E.; Molina-Lozano, H.; Sossa, H. Implementation of Kalman Filtering with Spiking Neural Networks. Sensors 2022, 22, 8845. [Google Scholar] [CrossRef] [PubMed]
  19. Morris, R.G.M. D.O. Hebb: The Organization of Behavior, Wiley: New York; 1949. Brain Res. Bull. 1999, 50, 437. [Google Scholar] [CrossRef]
  20. Chen, X.; Zhu, W.; Dai, Y.; Ren, Q. A Bio-Inspired Spiking Neural Network for Control of A 4-DoF Robotic Arm. In Proceedings of the 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA), Kristiansand, Norway, 9–13 November 2020; pp. 616–621. [Google Scholar]
  21. Zaidel, Y.; Shalumov, A.; Volinski, A.; Supic, L.; Ezra Tsur, E. Neuromorphic NEF-Based Inverse Kinematics and PID Control. Front. Neurorobot. 2021, 15, 631159. [Google Scholar] [CrossRef]
  22. Krakhmalev, O.; Krakhmalev, N.; Gataullin, S.; Makarenko, I.; Nikitin, P.; Serdechnyy, D.; Liang, K.; Korchagin, S. Mathematics Model for 6-DOF Joints Manipulation Robots. Mathematics 2021, 9, 2828. [Google Scholar] [CrossRef]
  23. Krakhmalev, O.; Korchagin, S.; Pleshakova, E.; Nikitin, P.; Tsibizova, O.; Sycheva, I.; Liang, K.; Serdechnyy, D.; Gataullin, S.; Krakhmalev, N. Parallel Computational Algorithm for Object-Oriented Modeling of Manipulation Robots. Mathematics 2021, 9, 2886. [Google Scholar] [CrossRef]
  24. Bellec, G.; Salaj, D.; Subramoney, A.; Legenstein, R.; Maass, W. Long Short-Term Memory and Learning-to-Learn in Networks of Spiking Neurons. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; Curran Associates, Inc.: Montreal, QC, Canada, 2018; Volume 31. [Google Scholar]
  25. Bellec, G.; Scherr, F.; Subramoney, A.; Hajek, E.; Salaj, D.; Legenstein, R.; Maass, W. A Solution to the Learning Dilemma for Recurrent Networks of Spiking Neurons. Nat. Commun. 2020, 11, 3625. [Google Scholar] [CrossRef]
  26. Traub, M.; Legenstein, R.; Otte, S. Many-Joint Robot Arm Control with Recurrent Spiking Neural Networks. arXiv 2021, arXiv:210404064. [Google Scholar]
  27. Lee, J.H.; Delbruck, T.; Pfeiffer, M. Training Deep Spiking Neural Networks Using Backpropagation. Front. Neurosci. 2016, 10, 508. [Google Scholar] [CrossRef] [PubMed]
  28. Massa, R.; Marchisio, A.; Martina, M.; Shafique, M. An Efficient Spiking Neural Network for Recognizing Gestures with a DVS Camera on the Loihi Neuromorphic Processor. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, Scotland, 19–24 July 2020; pp. 1–9. [Google Scholar]
  29. Hunsberger, E.; Eliasmith, C. Training Spiking Deep Networks for Neuromorphic Hardware. arXiv 2016, arXiv:1611.05141. [Google Scholar]
  30. Schuman, C.D.; Kulkarni, S.R.; Parsa, M.; Mitchell, J.P.; Date, P.; Kay, B. Opportunities for Neuromorphic Computing Algorithms and Applications. Nat. Comput. Sci. 2022, 2, 10–19. [Google Scholar] [CrossRef]
  31. Aaron Zeglen, M. Amygdala Modeling with Context and Motivation Using Spiking Neural Networks for Robotics Applications. Master’s Thesis, Wright State University, Dayton, OH, USA, 2022. [Google Scholar]
  32. Yamazaki, K. Towards Sensorimotor Coupling of a Spiking Neural Network and Deep Reinforcement Learning for Robotics Application. Bachelor’s Thesis, University of Arkansas, Fayetteville, AR, USA, 2020. [Google Scholar]
  33. Yamazaki, K.; Vo-Ho, V.-K.; Bulsara, D.; Le, N. Spiking Neural Networks and Their Applications: A Review. Brain Sci. 2022, 12, 863. [Google Scholar] [CrossRef] [PubMed]
  34. Wang, S.; Chen, X.; Huang, X.; Wei Zhang, D.; Zhou, P. Neuromorphic Engineering for Hardware Computational Acceleration and Biomimetic Perception Motion Integration. Adv. Intell. Syst. 2020, 2, 2000124. [Google Scholar] [CrossRef]
  35. Izhikevich, E.M. Which Model to Use for Cortical Spiking Neurons? IEEE Trans. Neural Netw. 2004, 15, 1063–1070. [Google Scholar] [CrossRef]
  36. Shama, F.; Haghiri, S.; Imani, M.A. FPGA Realization of Hodgkin-Huxley Neuronal Model. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1059–1068. [Google Scholar] [CrossRef]
  37. Giannari, A.G.; Astolfi, A. Model Design for Networks of Heterogeneous Hodgkin–Huxley Neurons. Neurocomputing 2022, 496, 147–157. [Google Scholar] [CrossRef]
  38. Dora, S.; Kasabov, N. Spiking Neural Networks for Computational Intelligence: An Overview. Big Data Cogn. Comput. 2021, 5, 67. [Google Scholar] [CrossRef]
  39. Lu, S.; Xu, F. Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks. Front. Neurosci. 2022, 16, 857513. [Google Scholar] [CrossRef] [PubMed]
  40. Kim, J.; Choi, Y.I.; Sohn, J.; Kim, S.-P.; Jung, S.J. Modeling Long-Term Spike Frequency Adaptation in SA-I Afferent Neurons Using an Izhikevich-Based Biological Neuron Model. Exp. Neurobiol. 2023, 32, 157–169. [Google Scholar] [CrossRef]
  41. Xiao, S.; Liu, W.; Guo, Y.; Yu, Z. Low-Cost Adaptive Exponential Integrate-and-Fire Neuron Using Stochastic Computing. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 942–950. [Google Scholar] [CrossRef]
  42. Carlu, M.; Chehab, O.; Dalla Porta, L.; Depannemaecker, D.; Héricé, C.; Jedynak, M.; Köksal Ersöz, E.; Muratore, P.; Souihel, S.; Capone, C.; et al. A Mean-Field Approach to the Dynamics of Networks of Complex Neurons, from Nonlinear Integrate-and-Fire to Hodgkin–Huxley Models. J. Neurophysiol. 2020, 123, 1042–1051. [Google Scholar] [CrossRef]
  43. Agebure, M.A.; Wumnaya, P.A.; Baagyere, E.Y. A Survey of Supervised Learning Models for Spiking Neural Network. Asian J. Res. Comput. Sci. 2021, 35–49. [Google Scholar] [CrossRef]
  44. Hong, C.; Wei, X.; Wang, J.; Deng, B.; Yu, H.; Che, Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible with Various Temporal Codes. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1285–1296. [Google Scholar] [CrossRef] [PubMed]
  45. Zenke, F.; Ganguli, S. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural Comput. 2018, 30, 1514–1541. [Google Scholar] [CrossRef]
  46. Fernández, J.G.; Hortal, E.; Mehrkanoon, S. Towards Biologically Plausible Learning in Neural Networks. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 5–7 December 2021; pp. 1–8. [Google Scholar]
  47. Shi, C.; Wang, T.; He, J.; Zhang, J.; Liu, L.; Wu, N. DeepTempo: A Hardware-Friendly Direct Feedback Alignment Multi-Layer Tempotron Learning Rule for Deep Spiking Neural Networks. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 1581–1585. [Google Scholar] [CrossRef]
  48. Wang, S.; Li, C. A Supervised Learning Algorithm to Binary Classification Problem for Spiking Neural Networks. In Proceedings of the 2021 8th International Conference on Information, Cybernetics, and Computational Social Systems (ICCSS), Beijing, China, 10–12 December 2021; pp. 448–452. [Google Scholar]
  49. Hazan, A.; Ezra Tsur, E. Neuromorphic Neural Engineering Framework-Inspired Online Continuous Learning with Analog Circuitry. Appl. Sci. 2022, 12, 4528. [Google Scholar] [CrossRef]
  50. DeWolf, T.; Stewart, T.C.; Slotine, J.-J.; Eliasmith, C. A Spiking Neural Model of Adaptive Arm Control. Proc. R. Soc. B Biol. Sci. 2016, 283, 20162134. [Google Scholar] [CrossRef]
  51. Joseph, G.V.; Pakrashi, V. Spiking Neural Networks for Structural Health Monitoring. Sensors 2022, 22, 9245. [Google Scholar] [CrossRef] [PubMed]
  52. DeWolf, T. A Neural Model of the Motor Control System. Ph.D. Thesis, University of Waterloo, Waterloo, ON, Canada, 2015. [Google Scholar]
  53. Stewart, T.; Eliasmith, C. Compositionality and Biologically Plausible Models. In The Oxford Handbook of Compositionality; Oxford handbooks; Oxford University Press: New York, NY, USA, 2012; pp. 596–615. ISBN 9780199541072. [Google Scholar]
  54. Bekolay, T.; Bergstra, J.; Hunsberger, E.; Dewolf, T.; Stewart, T.; Rasmussen, D.; Choo, X.; Voelker, A.; Eliasmith, C. Nengo: A Python Tool for Building Large-Scale Functional Brain Models. Front. Neuroinform. 2014, 7, 48. [Google Scholar] [CrossRef] [PubMed]
  55. Sharma, S.; Aubin, S.; Eliasmith, C. Large-Scale Cognitive Model Design Using the Nengo Neural Simulator. Biol. Inspired Cogn. Archit. 2016, 17, 86–100. [Google Scholar] [CrossRef]
  56. Kern, J.; Marrero, D.; Urrea, C. Fuzzy Control Strategies Development for a 3-DoF Robotic Manipulator in Trajectory Tracking. Processes 2023, 11, 3267. [Google Scholar] [CrossRef]
Figure 1. Biological neurons and synapses. (a) Structure and components of neurons. (b) Schematic diagram of a synapse [34].
Figure 2. Diagram of the simulation environment used.
Figure 3. Diagram of the spiking PID controller.
Figure 4. Spike activity of ensemble u during 1 second of simulation.
Figure 5. Spiking PID implementation in Nengo.
Figure 6. Test trajectory used to evaluate the performance of the spiking PID controller.
Figure 7. Simulated and desired Cartesian trajectory using spiking PID controller.
Figure 8. Simulated and desired joint trajectory using spiking PID controller.
Figure 9. Torque signal applied to the robot using spiking PID controller.
Figure 10. Cartesian trajectory error with the spiking PID controller.
Figure 11. Joint trajectory error with the spiking PID controller.
Figure 12. Joint trajectory performance comparison of the controllers: (a) performance comparison for joint 1; (b) performance comparison for joint 2; (c) performance comparison for joint 3.
Table 1. Neuron models.

Integrate and Fire (IF) [32]
  • Simulates the neuron's behavior rather than its structure.
  • Simplified model of a neuron with a firing threshold.
  • Widely recognized and employed.

Leaky Integrate and Fire (LIF) [32,33,39]
  • Adds a "leakage" term to the basic IF model.
  • Accounts for the ion movement across the membrane during non-equilibrium states.
  • Minimal computational requirements and simple structure.

Izhikevich [32,33,40]
  • Combines biological plausibility and computational efficiency.
  • Adjusting the model's parameters allows obtaining the diverse firing patterns observed in different brain neurons.
  • Viewed as an adaptive quadratic LIF model.

Adaptive Exponential LIF (AdEx) [33,41,42]
  • Includes threshold adaptation based on the membrane potential increase.
  • Incorporates an exponential voltage dependency.
  • Captures the adaptability of neurons.
Table 2. D–H parameters of the 3-DoF manipulator.

Joint    a_i    α_i    d_i    θ_i
1        0      π/2    d_1    θ_1
2        a_2    0      0      θ_2
3        a_3    0      0      θ_3
Table 3. Parameters considered for the manipulator.

Parameter    Link 1    Link 2    Link 3    Units
m            0.5       0.3       0.3       [kg]
l            0.352     0.36      0.445     [m]
l_c          0.176     0.18      0.22      [m]
F_v          0.15      0.25      0.25      [N·m·s/rad]
Table 4. Performance indices for Cartesian trajectory.

       ITAE                                        RMSE
       PID      Fuzzy    ANFIS    Spiking PID      PID      Fuzzy    ANFIS    Spiking PID
x      0.1577   0.4673   0.1923   0.3449           0.0064   0.0087   0.0039   0.0067
y      0.3054   0.1211   0.0825   0.1253           0.0068   0.0030   0.0023   0.0033
z      0.4505   1.17     0.1052   0.4158           0.0153   0.0217   0.0023   0.0089
Table 5. Performance indices for joint trajectory.

       ITAE                                        RMSE
       PID      Fuzzy    ANFIS    Spiking PID      PID      Fuzzy    ANFIS    Spiking PID
q1     0.8909   0.1367   0.1213   0.2896           0.0189   0.0054   0.0047   0.0087
q2     1.569    2.573    0.3517   0.6893           0.0412   0.0482   0.0079   0.0155
q3     0.6513   0.7583   0.4471   0.8631           0.0142   0.0174   0.0089   0.0169

Marrero, D.; Kern, J.; Urrea, C. A Novel Robotic Controller Using Neural Engineering Framework-Based Spiking Neural Networks. Sensors 2024, 24, 491. https://doi.org/10.3390/s24020491

