Pattern Recognition Letters

Volume 36, 15 January 2014, Pages 204-212

Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm

https://doi.org/10.1016/j.patrec.2013.05.019

Highlights

  • Output of biomimetic sensorimotor cortex model sent to robotic arm via UDP.

  • Robot arm joint positions sent back via UDP to external PC running model.

  • Communication between external PC and robot arm happens in real time.

  • Compared visualization of model’s virtual arm with robotic arm.

  • Sets stage for future closed-loop BMI with monkey based on realistic spiking model.

Abstract

Brain–machine interfaces (BMIs) can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in BMIs offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices.

Introduction

Understanding the human brain has become one of the great challenges of this century (Lytton et al., 2012). It has the potential to significantly benefit human health by providing tools to treat neural disease and repair neural damage. Greater understanding will also revolutionize our interactions with the world through development of BMIs for motor control that enable patients with central nervous system injuries or lost limbs to regain autonomy. Neuroscience research is also beginning to make progress towards understanding how neurons encode information, and how the complex dynamical interactions within and among neuronal networks lead to learning, and produce sensorimotor coordination and motor control. With this knowledge, we can begin to use computers both to decode brain information and to autonomously produce brain-like signals to control prosthetic devices, or to control a real arm (Carmena et al., 2003).

A key problem with simulation technology is that one cannot be sure whether something critical, known or unknown, has been left out. This concern suggests an approach whereby we embed biomimetic neuronal networks in the actual physical world in which the real brain is embedded. We thereby both take advantage of the benefits of physicality and ensure that our systems are not omitting some factor that is required for actual functionality. Physicality provides a form of memory, since the arm is actually located in space and will later be in the same location unless operated on by external forces such as gravity. It also provides inertia, which is memory for the first derivative of position. However, these mnemonic attributes, features under one set of circumstances, can become limitations (bugs) when the system is presented with other operational requirements.

The physical world also plays an essential role in learning and behavior (Almassy et al., 1998, Edelman, 2006, Krichmar and Edelman, 2005, Lungarella and Sporns, 2006, Webb, 2000). For example, the selection hypothesis emerges from embodiment: the environment can select neuronal dynamics that are suitable for producing desired behaviors through the agency of a limb or other effector (Edelman, 1987). Embodiment can be used to make predictions for how changes will occur during the perception–action–reward–cycle (Mahmoudi and Sanchez, 2011), as the embedding of a system in the environment provides adaptation opportunities. Further levels of embedding are achieved by the development of hybrid systems that use co-adapting symbiotic relations between brain (the prosthetics-user) and artificial agents (biomimetic systems and smart prosthetic devices) and among these artificial agents. The co-adaptations occur as the system moves towards common goals, e.g., reaching for an object (Sanchez et al., 2012). In this context, one can readily see the advantages of biomimesis: the biomimetic system is a partial model of the natural agent’s brain processes that facilitates the transfer of neural encodings without explicit decoding, providing representations based on those of the brain and then serving as intermediary between brain and devices. We note that co-adaptation is further extended to include the environment itself, as when we redesign objects so as to make them easier to manipulate with artificial limbs, or easier to navigate for individuals in wheelchairs.

Biomimetic brain models (BMMs) are able to replicate many experimental paradigms, such as sensorimotor learning experiments (Chadderdon et al., 2012, Neymotin et al., 2013) or cellular microstimulation (Kerr et al., 2012). They are also able to accurately reproduce physiological properties observed in vivo, including firing rates, stimulus-induced modulations and local field potentials (Neymotin et al., 2011). The system presented here will be extended into a framework to link real-time electrophysiological recordings with supercomputers that run the BMMs, and thence to control of prosthetic devices. This co-adaptive brain–machine interface extends the classical BMI paradigm by engaging both subject and computer model in synergistic learning.

Major challenges in assembling a system that incorporates a BMM into the neural decoder/smart prosthetic data stream are (1) getting the components to communicate with one another, and (2) achieving this communication in real time. There are a plethora of different systems for acquiring electrophysiological data from animal or human subjects (Buzsáki, 2004), a number of potential neuronal simulators (biomimetic and state-space) that might be assembled together to provide the brain model (Brette et al., 2007), and a wide array of potential prosthetic links with different physical characteristics that require different control strategies. Software components within this data stream may run on, for example, machines running MATLAB under Windows, or on machines running Python or C++ code under Linux. A networking framework needs to be developed that not only permits messages to be passed between these disparate environments and hardware platforms, but also does so in a timely fashion, so that the prosthetic-using subject does not perceive a disruptive lag in the performance of a prosthetic limb.
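The paper does not specify the wire format used for this cross-platform messaging. A minimal sketch, assuming each UDP datagram carries four joint angles packed as little-endian doubles (an arbitrary choice for illustration; the actual joint count and byte order are not given), could look like:

```python
import socket
import struct

# Assumed wire format: four joint angles (radians) as little-endian doubles.
JOINT_FMT = "<4d"

def send_joint_angles(sock, addr, angles):
    """Pack joint angles into a single UDP datagram and send it to addr."""
    sock.sendto(struct.pack(JOINT_FMT, *angles), addr)

def recv_joint_angles(sock):
    """Block until one datagram arrives; unpack it into a tuple of angles."""
    data, _ = sock.recvfrom(struct.calcsize(JOINT_FMT))
    return struct.unpack(JOINT_FMT, data)
```

A fixed binary layout like this lets a MATLAB/Windows sender and a Python/Linux receiver interoperate without any serialization library, which matters when every millisecond of latency counts.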

In this paper, we address the initial problems of developing the larger real-time co-adaptive BMI system. We begin with the design of inter-system communications between the BMM and the prosthetic limb, leading towards a real-time interface between a model of sensorimotor cortex and a robotic arm (Barrett Technology’s Whole Arm Manipulator – WAM (Barrett Technology Inc., 2012a)). We provide the NEURON-based BMM and the robotic arm, each running on a separate Linux-based machine. Our implementation of the real-time interface then provides a Python application that forwards data from the BMM to the WAM arm and passes robot arm position information back to a display window.
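The forwarding application itself is not listed in the paper; a minimal sketch of one relay step, with the socket wiring, buffer size, and endpoint addresses all illustrative assumptions, might be:

```python
import socket

BUF = 1024  # assumed maximum datagram size

def relay_once(bmm_sock, wam_sock, wam_addr, vis_addr):
    """Forward one joint command from the BMM to the WAM, then pass the
    WAM's joint-position reply on to the visualization, unmodified."""
    cmd, _ = bmm_sock.recvfrom(BUF)   # joint command from the model
    wam_sock.sendto(cmd, wam_addr)    # drive the robot arm
    pos, _ = wam_sock.recvfrom(BUF)   # robot's actual joint positions
    wam_sock.sendto(pos, vis_addr)    # update the 3D display window
    return pos
```

In practice this step would run in a continuous loop, with the returned positions also retained so they can later feed the model's learning loop.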

Section snippets

Methods

We used a BMM previously developed within the NEURON simulation environment using an extended synaptic functionality (Lytton et al., 2008). The original model drove a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model to drive mirroring motion of a Barrett Technology WAM robotic arm in real time through a UDP interface. Additionally, the position information from the robot arm was fed back into the BMM and forwarded to the virtual arm visualization.

Experimental set-up

The BMM ran on an external PC (a laptop with a 2.66 GHz Intel Core i7) and sent joint angles via wireless LAN to the WAM internal PC controlling the WAM arm. The resultant WAM joint angles were fed back to the BMM via wireless LAN and forwarded to the WAM virtual arm visualization, running under V-REP on the same external PC (Fig. 4). The joint angles are passed through the BMM software so that they can later be used as inputs for the learning loop. The left window on the laptop screen in Fig. 4 shows the

Discussion

We have developed a real-time interface between a BMM of sensorimotor cortex and a robotic device in the real world. We used our model to demonstrate the feasibility of using realistic neuronal network models to control devices in real time. We evaluated the system using four different reaches, each lasting more than 1 min. We demonstrated that the robot arm could follow these BMM trajectories in real time.
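The paper reports real-time following but gives no latency figures here. One way to quantify the link delay over such a UDP channel, assuming a hypothetical endpoint that simply echoes each datagram back, is:

```python
import socket
import struct
import time

def measure_roundtrip(sock, addr, n=100):
    """Send n timestamped probe datagrams to an echoing endpoint and
    return the mean round-trip time in seconds."""
    total = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        sock.sendto(struct.pack("<d", t0), addr)  # probe carries its send time
        sock.recvfrom(64)                         # block until the echo returns
        total += time.perf_counter() - t0
    return total / n
```

Mean round-trip time under typical command rates would indicate whether a prosthetic user might perceive a lag in the arm's response.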

The trajectories generated by the BMM exhibited significant fluctuations as a result of

Acknowledgements

Research supported by DARPA grant N66001-10-C-2008. The authors thank the Barrett Technology support team for WAM arm support, and Larry Eberle (SUNY Downstate) for Neurosim lab support.

References (34)

  • Carnevale, N., et al., 2006. The NEURON Book.

  • Chadderdon, G., 2009. A neurocomputational model of the functional role of dopamine in stimulus-response task learning...

  • Chadderdon, G.L., et al., 2012. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex. PLoS One.

  • Digiovanna, J., et al., 2010. Cyber-workstation for computational neuroscience. Front. Neuroeng.

  • Edelman, G., 1987. Neural Darwinism: The Theory of Neuronal Group Selection.

  • Edelman, G., 2006. The embodiment of mind. Daedalus.

  • Freese, M., Singh, S., Ozaki, F., Matsuhira, N., 2010. Virtual Robot Experimentation Platform V-REP: A Versatile 3D...