Perspective

A Perspective on Prosthetic Hands Control: From the Brain to the Hand

Centro Protesi Inail, Vigorso di Budrio, 40054 Bologna, Italy
* Author to whom correspondence should be addressed.
Prosthesis 2023, 5(4), 1184-1205; https://doi.org/10.3390/prosthesis5040083
Submission received: 7 August 2023 / Revised: 9 November 2023 / Accepted: 13 November 2023 / Published: 16 November 2023
(This article belongs to the Special Issue Innovations in the Control and Assessment of Prosthetic Arms)

Abstract

The human hand is a complex and versatile organ that enables humans to interact with the environment, communicate, create, and use tools. The control of the hand by the brain is a crucial aspect of human cognition and behaviour, but also a challenging problem for both neuroscience and engineering. The aim of this study is to review the current state of the art in hand and grasp control from a neuroscientific perspective, focusing on the brain mechanisms that underlie sensory integration for hand control and the engineering implications for developing artificial hands that can mimic and interface with the human brain. The brain controls the hand by processing and integrating sensory information from vision, proprioception, and touch, using different neural pathways. The user’s intention to control an artificial hand can be decoded through different interfaces, such as electromyography, electroneurography, and electroencephalography. This and other sensory information can be exploited by different learning mechanisms that help the user adapt to changes in sensory inputs or outputs, such as reinforcement learning, motor adaptation, and internal models. This work summarizes the main findings and challenges of each aspect of hand and grasp control research and highlights the gaps and limitations of the current approaches. In the last part, open questions and future directions for hand and grasp control research are suggested, emphasizing the need for a neuroscientific approach that can bridge the gap between the brain and the hand.

1. Introduction

The human hand is a complex and versatile organ, capable of performing a wide range of movements and manipulations with precision and dexterity [1,2,3]. The hand enables humans to interact with the environment, communicate, create, and use tools [4]. However, how the human brain controls the hand and its grasp is not fully understood and poses several challenges for both neuroscience and engineering.
From a neuroscience perspective, understanding the brain mechanisms of hand control can shed light on the neural basis of motor skills, sensorimotor integration, action perception, and tool use [5,6]. These are fundamental processes for human evolution and culture, as well as for the development and rehabilitation of motor functions. From an engineering perspective, understanding the brain mechanisms of hand control can inspire the design and control of robotic and prosthetic hands [7,8]. These devices aim to restore or enhance the functionality and appearance of the natural hand, enabling users to perform daily activities and interact with objects. For people who use a prosthesis, the device is a way to recover what they have lost, i.e., the hand.
Current prosthetic hands and their control strategies often fall short of the natural hand [2]. They are usually less complex and dexterous than natural hands [9], and their control strategies are often unnatural and unintuitive [10], requiring high cognitive effort from the user and resulting in poor naturalness and fluidity of movements. Furthermore, artificial hands lack effective sensory feedback [11], which is crucial for hand perception and control.

2. Prosthetic Hands Control

The control of prosthetic hands is a challenging problem that involves the integration of different components, such as sensors, actuators, signal processing algorithms, and feedback mechanisms [8,12]. The control strategies can be classified into two main categories: feedforward and feedback control. Feedforward control relies on the user’s intention to generate commands for the prosthetic hand, while feedback control provides information to the user about the state and outcome of the hand’s actions [13].
Feedforward control can be achieved using different input signals, e.g., electromyographic (EMG) signals, electroencephalographic (EEG) signals, eye tracking, or residual limb motion.
EMG signals are the most widely used input signals for prosthetic hands and can be recorded from surface electrodes attached to the skin or from implanted electrodes inserted into the muscles or nerves [12,14]. EMG signals can be processed using various methods, such as pattern recognition, regression, or proportional control, to decode the user’s intention [12]. Pattern recognition methods use machine learning algorithms to classify different hand gestures based on EMG features, while regression methods use mathematical models to estimate continuous variables, such as joint angles or forces, based on EMG signals [15]. Proportional control methods use the amplitude or frequency of EMG signals to modulate the speed or force of the prosthetic hand [16,17].
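As a minimal illustration of proportional control (a sketch, not a published controller: the rest and maximum amplitude levels below are illustrative assumptions that would normally be calibrated per user), the following Python snippet maps the RMS envelope of one sEMG channel to a normalized hand-closing speed, with a dead band to suppress resting noise.

```python
import numpy as np

def rms_envelope(emg_window: np.ndarray) -> float:
    """Root-mean-square amplitude of one window of raw sEMG samples."""
    return float(np.sqrt(np.mean(emg_window ** 2)))

def proportional_command(emg_window: np.ndarray,
                         rest_level: float = 0.02,   # assumed baseline (mV)
                         max_level: float = 0.5      # assumed max voluntary level (mV)
                         ) -> float:
    """Map sEMG amplitude to a normalized hand-closing speed in [0, 1].

    Below rest_level the hand does not move (dead band); between
    rest_level and max_level the speed scales linearly with amplitude.
    """
    amp = rms_envelope(emg_window)
    speed = (amp - rest_level) / (max_level - rest_level)
    return float(np.clip(speed, 0.0, 1.0))

# Example: a 200 ms window sampled at 1 kHz (synthetic data).
window = 0.1 * np.random.randn(200)
print(f"closing speed command: {proportional_command(window):.2f}")
```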
EEG signals can be acquired from scalp electrodes or from electrodes implanted in the brain and processed using methods such as event-related potentials (ERPs), steady-state visual evoked potentials (SSVEPs), or motor imagery (MI) to decode the user’s intention [18]. ERPs are transient changes in EEG signals that respond to specific stimuli, like visual cues or tactile feedback. SSVEPs are oscillatory changes in EEG signals that occur when the user is exposed to flickering lights at different frequencies [19]. MI is a mental process that involves imagining performing a specific movement without executing it. These methods can be used to select different hand gestures or activate different degrees of freedom (DoFs) of the prosthetic hand [18].
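To make the SSVEP principle concrete, the sketch below picks the flicker frequency with the highest spectral power in a single EEG channel and maps it to a gesture. The sampling rate, target frequencies, and frequency-to-gesture mapping are illustrative assumptions; practical SSVEP decoders typically use multichannel methods such as canonical correlation analysis.

```python
import numpy as np

FS = 250                          # assumed EEG sampling rate (Hz)
TARGETS = {8.0: "power grasp",    # flicker frequency (Hz) -> gesture
           10.0: "pinch grasp",   # (mapping is illustrative)
           12.0: "open hand"}

def ssvep_gesture(eeg: np.ndarray) -> str:
    """Return the gesture whose flicker frequency dominates the spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    def power_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    return TARGETS[max(TARGETS, key=power_at)]

# Example: 2 s of synthetic EEG dominated by a 10 Hz response.
t = np.arange(0, 2.0, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(len(t))
print(ssvep_gesture(eeg))         # -> "pinch grasp" (with high probability)
```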
Eye tracking reflects the user’s visual attention and gaze direction [20]. Gaze is recorded by cameras or glasses that capture the movement of the eyes and pupils, and the recordings are processed using methods such as gaze fixation, gaze switching, or gaze estimation to decode the user’s intention [12,21]. Gaze fixation triggers a corresponding hand gesture when the user looks at a specific target for a certain amount of time [22,23]. Gaze switching triggers a corresponding transition between hand gestures when the user shifts their gaze from one target to another [24]. Gaze estimation involves estimating the 3D position and orientation of the user’s gaze and mapping it to the 3D position and orientation of the prosthetic hand [21].
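The dwell-based logic of gaze fixation can be sketched in a few lines; the fixation radius, dwell threshold, target coordinates, and gesture name below are illustrative assumptions rather than values from the cited studies.

```python
import math

DWELL_S = 1.0          # assumed dwell time needed to trigger (seconds)
RADIUS_PX = 60         # assumed fixation radius around a target (pixels)

class FixationTrigger:
    """Trigger a gesture when gaze dwells on a target long enough."""
    def __init__(self, target_xy, gesture):
        self.target_xy = target_xy
        self.gesture = gesture
        self.dwell = 0.0

    def update(self, gaze_xy, dt):
        """Feed one gaze sample; return the gesture once dwell completes."""
        dist = math.dist(gaze_xy, self.target_xy)
        self.dwell = self.dwell + dt if dist <= RADIUS_PX else 0.0
        if self.dwell >= DWELL_S:
            self.dwell = 0.0
            return self.gesture
        return None

# Example: 100 Hz gaze samples resting on a target at (400, 300).
trig = FixationTrigger((400, 300), "tripod grasp")
for _ in range(120):                  # 1.2 s of fixation
    out = trig.update((405, 298), 0.01)
    if out:
        print("selected:", out)
        break
```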
Residual limb motion is another type of input signal that exploits the user’s natural movement patterns and preferences [25]. Residual limb motion can be obtained from sensors attached to the stump or from wearable devices [26]. In particular, gloves or bracelets capture the movement of other body parts, and these signals are processed using methods like direct mapping, inverse kinematics, or synergy-based control to decode the user’s intention and generate commands for the prosthetic hand [27]. Direct mapping assigns each DoF of the residual limb to a corresponding DoF of the prosthetic hand. Inverse kinematics calculates the joint angles of the prosthetic hand required to achieve a desired end-effector position and orientation based on residual limb motion. Synergy-based control reduces the dimensionality of the control space by using predefined combinations of joint angles that correspond to natural hand postures [28].
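As a sketch of the inverse-kinematics idea, the closed-form solution below computes the two joint angles of a planar two-link chain that reach a desired end-effector position; real prostheses involve many more DoFs and constraints, and the link lengths here are illustrative assumptions.

```python
import math

def two_link_ik(x, y, l1=0.30, l2=0.25):
    """Closed-form IK for a planar 2-link chain (link lengths in metres).

    Returns (proximal_angle, distal_angle) in radians for the
    'elbow-down' solution, or None if the target is out of reach.
    """
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None                      # target unreachable
    q2 = math.acos(c2)                   # distal joint angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

# Example: joint angles needed to reach a point roughly 40 cm away.
print(two_link_ik(0.35, 0.20))
```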
Feedback control can be achieved by using different output signals: tactile feedback, proprioceptive feedback, auditory feedback, or visual feedback.
Tactile feedback provides information about the contact between the prosthetic hand and objects [8,29,30], such as location, pressure, texture, shape, or temperature, and can be delivered by different actuators, e.g., vibrotactile motors [31], electrotactile stimulators, pneumatic devices, shape memory alloys (SMAs) [32], or soft robotics. Vibrotactile motors produce vibrations perceived by the skin. Electrotactile stimulators generate electrical pulses that can be sensed by the nerves [33]. Pneumatic devices use air pressure to create deformations or displacements that can be felt by the skin. SMAs are materials that change their shape or length when heated by an electric current [32]. Soft robotic devices likewise create deformations or displacements that can be perceived by the skin [34].
Proprioceptive feedback provides information about the position and movement of the prosthetic hand, such as joint angles, velocities, or accelerations [35]. Proprioceptive feedback can be delivered by different methods, such as sensory substitution, sensory augmentation, or sensory restoration [35,36,37]. Sensory substitution converts proprioceptive information into another sensory modality, e.g., tactile, auditory, or visual [35]. Sensory augmentation enhances proprioceptive information with additional cues, such as force, torque, or stiffness [36]. Sensory restoration returns proprioceptive information to its original sensory modality by using neural interfaces, such as peripheral nerve stimulation or cortical stimulation [35].
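A minimal sketch of sensory substitution, assuming a simple linear mapping and illustrative ranges: a prosthetic joint angle is converted into a vibrotactile duty cycle so that the user perceives hand aperture through the skin.

```python
def angle_to_vibration(joint_angle_deg: float,
                       angle_min: float = 0.0,     # fully open (assumed)
                       angle_max: float = 90.0,    # fully closed (assumed)
                       duty_min: float = 0.1,      # faint but perceivable
                       duty_max: float = 1.0) -> float:
    """Linearly map a finger joint angle to a vibromotor duty cycle.

    The nonzero duty_min keeps the cue above the perception threshold
    whenever the hand is not fully open.
    """
    frac = (joint_angle_deg - angle_min) / (angle_max - angle_min)
    frac = min(max(frac, 0.0), 1.0)
    return 0.0 if frac == 0.0 else duty_min + frac * (duty_max - duty_min)

print(angle_to_vibration(45.0))   # mid-closure -> mid-intensity vibration
```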
Auditory feedback is a type of output signal that uses different sounds, like speech, music, tones, clicks, or noises, to provide information about the state and outcome of the prosthetic hand’s actions, such as success, failure, error, or warning [38].
Visual feedback is a type of output signal that provides information about the appearance and performance of the prosthetic hand through different types of displays, like monitors, projectors, glasses, or virtual reality. Visual and auditory feedback can be used to complement or replace other types of feedback [12].
The advantages and disadvantages of the different control strategies for prosthetic hands depend on various factors, as well as on the personal preferences and needs of the users; some general advantages and disadvantages are summarized in Table 1. In particular, prosthetic hand control is a complex and multifaceted problem that depends on the following:
  • The type and quality of the input and output signals, which determine the information content and fidelity of the control system;
  • The complexity and robustness of the signal processing algorithms, which affect the accuracy and reliability of the control commands;
  • The usability and acceptability of the user interface devices, which influence the comfort and satisfaction of the users.
The most common and widely used control method for prosthetic hands is based on surface EMG (sEMG) signals, which can provide a natural and intuitive way of controlling the prosthetic hand but are also noisy, variable, and limited in information content.
Different approaches for improving the control of prosthetic hands and overcoming the limitations of sEMG-based control have been investigated: using hybrid sensor modalities, applying motor learning principles, employing hierarchical or biomimetic control methods, implementing invasive or noninvasive sensory feedback mechanisms, developing shape memory alloy actuators, and studying eye tracking metrics. However, these approaches are still in their early stages of development and require further research and evaluation to ensure their safety, reliability, durability, usability, and functionality.
Moreover, these approaches have limitations due to the lack of a neurophysiological perspective on the human hand.
More research and development are needed to design and control prosthetic hands that can better mimic human hand functions and provide a satisfactory user experience for different users and tasks. A potential solution is to design and implement novel and effective control strategies for artificial hands that are based on the neural mechanisms of hand function in the human brain [3]. The human brain uses a series of hierarchical and parallel processes to control the hand, integrating information from different sensory, cognitive, and emotional levels [39,40], and can adapt to changes in the environment and the body, learning new skills and optimizing performance [41,42]. These features make the human brain a source of inspiration for developing natural, intuitive, and efficient control strategies for artificial hands.
This article aims to provide a comprehensive and insightful overview of the current state of the art and future directions of the human brain mechanisms involved in hand and grasp control and their engineering implications for the design and control of artificial hands. The main brain regions and pathways responsible for the generation, execution, and regulation of hand movements, as well as for the integration of sensory, cognitive, and emotional information, will be described. How these brain mechanisms can inspire the development of novel and effective control architectures and interfaces for artificial hands, mimicking the natural and efficient coordination and regulation of hand movements by the brain, will also be discussed. The article will conclude with suggestions and recommendations for future research and development on brain-inspired hand and grasp control. The first step will be to investigate the functioning and user satisfaction of commercial prosthetic hands.

3. Current Commercial Prosthetic Hands and Their Limitations

Commercial prosthetic hands are classified into three main types: passive cosmetic hands, body-powered hands (also known as kinematic hands), and active myoelectric hands [43]. Passive cosmetic hands resemble the natural hand in shape and colour but have no movement capabilities [44]. Kinematic hands enable users to grasp objects by controlling opening and closing through cables and a harness anchored on the contralateral side of the body [44]. Owing to their high robustness and durability, kinematic hands are usually adopted for heavy work that requires lifting and manipulating heavy objects [10]. Active myoelectric hands are powered by electric motors and can perform different movements and grasps, depending on the number and configuration of the fingers and joints [45]. The user controls active myoelectric hands by using EMG signals from the residual muscles of the limb, which are detected by electrodes attached to the skin [46] or implanted in the muscles [47]. Although current myoelectric hands allow users to partially recover the hand’s simpler functions [48], several limitations persist that affect their performance and acceptance, related to the control methods and to sensory restitution. The control methods are usually very simple, based on proportional or on/off control [49]; they are unnatural and unintuitive, as they require the user to generate arbitrary muscle contractions that are not related to the desired movement or grasp. Moreover, these control methods can only operate one DoF at a time, which limits the versatility and dexterity of the prosthetic hand [50]. Sensory restitution is essential for grasping and manipulating objects [51]: it provides information about the state of the hand and the object, such as contact, pressure [52], slippage [53], and temperature [54], and it also enables error correction and adaptation, as well as emotional and aesthetic satisfaction. Commercial prosthetic hands do not provide any sensory feedback to the user, except for some visual and auditory cues from the device itself. This makes it difficult for the user to regulate grip force and load force, which are required to grasp and manipulate objects without slipping [55,56] or dropping them [57]. Because of these limitations, commercial prosthetic hands suffer from low functionality and usability and high user abandonment rates. Indeed, only 56% of upper limb prosthesis users reported wearing their prosthesis every day, while 23% reported never wearing their prosthesis [43]. The main reasons for dissatisfaction and abandonment were poor fit, comfort, appearance, weight, functionality, reliability, noise, maintenance, cost, and social stigma [43]. Understanding the needs and preferences of amputees is crucial for designing and controlling prosthetic hands that can meet their expectations and improve their quality of life. Therefore, amputees should be involved in the development process from the beginning and not only at the end.

4. Needs of Upper Limb Prosthesis Users

Upper limb amputees face many challenges and difficulties in their daily lives, as they lose the ability to perform various tasks that require the use of the hand. These tasks include grasping, manipulating, writing, or gesturing, among others [58,59]. Moreover, they may experience psychological and social problems, such as low self-esteem, depression, stigma, or isolation [58,60]. Therefore, prosthetic devices should address their different needs and expectations in terms of functionality, comfort, appearance, and social acceptance [58,61].
The main needs of upper limb prosthesis users are comfort, reliability, functionality, sensory feedback, and affordability [58]. Comfort is related to the physical and psychological aspects of wearing and using a prosthesis, including weight, fit, ease of donning and doffing, and cosmetic appearance [58,62]. Reliability is associated with the technical and mechanical aspects of the prosthesis, covering robustness, durability, maintenance, and repair [58,63]. Functionality is concerned with the performance and versatility of the prosthesis, involving adaptability, intuitiveness, and control [48,58]. Sensory feedback is about the information provided by the prosthesis to the user, encompassing touch, force, or position [53,58]. Affordability is linked to the economic and social aspects of acquiring and using a prosthesis, comprising cost, insurance, or reimbursement [58,64].
More user-centred design and evaluation of prosthetic devices is needed to address the diverse and dynamic needs of upper limb prosthesis users [58,61]. A possible way to achieve this is a new design paradigm in the field of prosthetic hand control that does not merely aim to replicate the natural hand and its brain control, but rather leverages the synergies and simplifications that the brain employs to control the hand [59,65]. The first step is to understand the physical and physiological principles underlying the brain’s control of movement.

5. The Brain as Inspiration

5.1. Brain Regions for Hand Control

The human brain consists of several regions that are involved in hand control, each with specific roles and functions. These regions can be broadly divided into two categories: cortical (Figure 1a) and subcortical (Figure 1b) regions. Cortical regions are the outermost layers of the brain, composed of grey matter. They include the primary motor cortex (M1), the premotor cortex (PMC), the supplementary motor area (SMA), the parietal cortex (PC), and the temporal cortex (TC). These regions are responsible for the generation, execution, and regulation of hand movements, as well as for the integration of sensory, cognitive, and emotional information [5,6,66].
The M1, located in the precentral gyrus of the frontal lobe, is the primary source of motor commands to the spinal cord and muscles and controls movement through two groups of projections: the lateral group controls the limbs, hands, and fingers, while the medial group controls the trunk and proximal muscles [67].
The PMC, situated in the frontal lobe anterior to M1, plays a role in planning and preparing hand movements. It is especially involved in those movements that are guided by sensory cues or learned by imitation and helps with motor learning and adaptation [40,41].
The SMA, located on the medial surface of the frontal lobe, participates in planning and initiating hand movements. It is particularly involved in those movements that are self-generated or memory-based and coordinates bimanual movements and action sequences [42,68].
The PC, situated behind the central sulcus in the posterior part of the brain, processes somatosensory information from the hand, e.g., touch, pressure, temperature [54], pain, and proprioception. It also combines multisensory information from vision, audition, and cognition and maps it onto motor representations of the hand [4,39].
The TC, located below the lateral fissure in the lateral part of the brain, processes visual information from the hand. It handles shape, size, orientation, and motion, identifying objects and tools by their appearance and function and linking them with hand actions [69].
Subcortical regions lie beneath the cortex and, for hand control, include the cerebellum and the basal ganglia. These regions are responsible for modulating and refining hand movements, as well as for learning and habit formation [70,71].
The cerebellum, situated below the occipital lobe at the back of the brain, coordinates and fine-tunes hand movements. It ensures accuracy and smoothness, detects and corrects errors in movement execution, and stores motor memories [24,72].
The basal ganglia, a group of nuclei deep within the brain near the thalamus, select and initiate hand movements. They suppress unwanted movements and switch between different movement patterns. They also encode reward and reinforcement signals and help with motor learning and habit formation [71,73].

5.2. Brain Pathways for Hand Control

The human brain uses several pathways for transmitting and receiving information between the brain regions and the spinal cord and muscles involved in hand control. These pathways can be broadly divided into two categories: descending and ascending pathways [69].
Descending pathways are the ones that carry motor commands from the brain to the spinal cord and muscles, initiating and modulating hand movements. They include the corticospinal tract, the corticobulbar tract, and the cortico–cortical connections [66].
The corticospinal tract, the main pathway for voluntary hand control, originates from the M1, the PMC, and the SMA and descends through the brainstem. It then branches into two: the lateral corticospinal tract, which crosses to the opposite side of the spinal cord and controls the hand and finger muscles; and the anterior corticospinal tract, which stays on the same side of the spinal cord and controls the arm and shoulder muscles [67,74].
The corticobulbar tract, a pathway for cranial nerve control, originates from the M1, PMC, and SMA and descends through the brainstem, ending in various brainstem nuclei that control the cranial nerves. These include the trigeminal nerve (for jaw movements), the facial nerve (for facial expressions), and the hypoglossal nerve (for tongue movements), which often act in coordination with hand movements [75,76].
The cortico–cortical connections, pathways for interhemispheric and intrahemispheric communication, link the cortical regions related to hand control (the M1, PMC, SMA, PC, and TC) with one another and with other cortical regions through white matter fibers. They comprise the corpus callosum, which links homologous regions of both hemispheres, and the superior longitudinal fasciculus, which links frontal and parietal regions within each hemisphere [77,78].
Ascending pathways are the ones that carry sensory feedback from the spinal cord and muscles to the brain, informing and adjusting hand movements. They include the dorsal column–medial lemniscus pathway, the spinothalamic pathway, and the spinocerebellar pathway [70,71].
Carrying touch, pressure, vibration, and proprioception information from the hand and fingers to the brain, the dorsal column–medial lemniscus pathway is the main pathway for somatosensory feedback. This pathway begins at the dorsal root ganglia and ascends through the dorsal columns of the spinal cord. In the medulla, it synapses and crosses to the opposite side, forming the medial lemniscus; from there, it reaches the thalamus, where it synapses again and sends signals to the primary somatosensory cortex and the parietal cortex [5,79].
The spinothalamic pathway carries pain, temperature [54], itch, and crude touch information from the hand and fingers to the brain. This pathway begins at the dorsal root ganglia and enters the dorsal horn of the spinal cord, where it synapses and crosses to the opposite side, forming the spinothalamic tract. From there, it ascends through the brainstem and reaches the thalamus, where it synapses again and sends signals to the primary somatosensory cortex and the parietal cortex [80,81].
Carrying muscle length, tension, and joint position information from the hand and fingers to the cerebellum, the spinocerebellar pathway is a pathway for proprioceptive feedback. This pathway begins from the dorsal root ganglia of the spinal cord and enters the dorsal horn of the spinal cord. It then branches into two: the posterior spinocerebellar tract, which goes up on the same side of the spinal cord and reaches the cerebellum through the inferior cerebellar peduncle, and the anterior spinocerebellar tract, which crosses to the opposite side of the spinal cord, goes up through the brainstem, and crosses again to reach the cerebellum through the superior cerebellar peduncle [80,82].
Besides the descending and ascending pathways, the brain also uses other mechanisms to integrate the sensory information from the hand and fingers with other modalities, such as vision, audition, and vestibular systems. These mechanisms involve the interaction of multiple brain regions, such as the parietal cortex, the temporal cortex, the frontal cortex, and the cerebellum, as well as the subcortical structures, such as the thalamus, the basal ganglia, and the brainstem. Sensory integration enables the brain to form a coherent representation of the hand and its environment and to coordinate hand movements with other sensory cues [83].

5.3. Brain Mechanisms for Sensory Integration

The human brain integrates sensory information from different modalities, such as vision, audition, and touch, to form coherent and accurate representations of the external world and the body [84]. Sensory integration is essential for hand control, as it allows the brain to perceive the properties and location of objects, to guide and adjust hand movements, and to monitor the outcomes of actions [69]. However, sensory integration is not a fixed or automatic process. The brain can flexibly modulate sensory integration depending on the context, the task, and the reliability of sensory inputs [85,86].
Cortical regions for sensory integration are mainly located in the PC, which receives and processes somatosensory, visual, and auditory information from the hand and fingers [5,6]. The PC consists of several subregions that perform different functions for sensory integration. These include the primary somatosensory cortex (S1), which maps the tactile sensations from the hand; the secondary somatosensory cortex (S2), which integrates tactile information with other modalities; the posterior parietal cortex (PPC), which combines sensory information with motor commands and spatial representations; and the intraparietal sulcus (IPS), which encodes object features and hand postures [79,87]. These parietal regions interact with other cortical regions involved in hand control, such as the M1, PMC, SMA, and TC [66,69].
Subcortical regions for sensory integration are mainly located in the thalamus, which acts as a relay station between the sensory receptors and the cortex, and in the cerebellum, which coordinates and fine-tunes hand movements based on sensory feedback [67,70]. The thalamus consists of several nuclei that receive and transmit sensory information from different modalities to specific cortical regions. These include the ventral posterior nucleus (VPN), which relays somatosensory information to the S1; the lateral geniculate nucleus (LGN), which relays visual information to the primary visual cortex (V1); the medial geniculate nucleus (MGN), which relays auditory information to the primary auditory cortex (A1); and the pulvinar nucleus (PUL), which integrates multisensory information and modulates attention [80,81]. The cerebellum consists of several lobules that receive and send sensory information from different sources to various cortical regions. These include the anterior lobe (lobules I–V), which receives proprioceptive information from the spinal cord and projects to the M1; the posterior lobe (lobules VI–IX), which receives visual, auditory, and vestibular information from the brainstem and projects to the PPC and the SMA; and the flocculonodular lobe (lobule X), which receives vestibular information from the inner ear and projects to the vestibular nuclei [82,88].
The human brain controls the hand by performing a series of processes that are organized in levels and run in parallel, combining information from different sources of sensation, cognition, and emotion. However, for people who have lost their hands due to injury or disease, these mechanisms are disrupted or impaired, leading to difficulties in performing daily activities and reducing their quality of life. To restore hand function and sensation, prosthetic hands have been developed that can mimic the appearance and movements of the natural hand. However, current prosthetic hands are still limited in terms of sensory feedback, control complexity, and user acceptance. Therefore, new approaches are needed to improve the performance and usability of prosthetic hands, by taking into account the hierarchical control strategies that the brain uses for natural hand control [10]. Which prosthetic hand control strategies are most similar to the model of the human brain?

6. Hierarchical Control Strategies for Prosthetic Hands

Hierarchical control strategies are based on the idea that the brain controls the hand by using different levels of abstraction and representation, from high-level goals and intentions to low-level motor commands and signals [48,89,90,91,92], and can provide a framework for mapping between different levels of control and representation [93], as well as for integrating multiple sources of information, such as visual, proprioceptive, tactile, and emotional cues [94,95].
Hierarchical control strategies for prosthetic hands can be divided into three main levels: task level, gesture level, and action level (Figure 2) [48,49,96].
The task level refers to the desired outcome or goal of a movement, such as reaching for an object or typing a word [97,98]. This can be influenced by social and contextual cues that affect the user’s motivation and emotions [99,100]. To control the task level, the user employs high-level inputs (e.g., EMG, neural signals [101], or EEG signals) that reflect their intention or volition [47,102,103].
The gesture level selects a suitable motor program or strategy for achieving the goal by choosing a grasp type and/or a finger sequence [90,104,105] and can be influenced by visual and proprioceptive cues that provide information about the object and the hand, such as shape, size, weight, texture, and orientation [106,107]. The gesture level is controlled by machine learning algorithms, such as reinforcement learning, deep learning, or neural networks, that learn from the user’s behavior, preferences, and feedback [108,109].
The action level generates motor signals to move the prosthetic hand’s motors and joints [110,111] and can be influenced by tactile cues that provide feedback about the state of the hand and objects, such as contact, pressure, slippage [112], and temperature [51,57,113]. The action level is driven by sensors on the device or in the environment that measure motor commands or signals, such as force, position, velocity, and acceleration [114,115].
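The three levels can be pictured as a pipeline, as in the minimal sketch below; all thresholds, gestures, and class names are illustrative assumptions rather than a published architecture. The task level decodes an intention from an input amplitude, the gesture level picks a grasp from object cues, and the action level turns the grasp into joint set-points bounded by tactile feedback.

```python
from dataclasses import dataclass

@dataclass
class ObjectInfo:               # visual/proprioceptive cues (gesture level)
    width_cm: float

def task_level(emg_amplitude: float) -> str:
    """Decode the user's high-level intention (illustrative threshold)."""
    return "grasp" if emg_amplitude > 0.2 else "rest"

def gesture_level(task: str, obj: ObjectInfo) -> str:
    """Select a grasp type from object cues."""
    if task != "grasp":
        return "open"
    return "pinch" if obj.width_cm < 3.0 else "power"

def action_level(gesture: str, contact_force: float) -> dict:
    """Generate joint set-points; tactile feedback caps the grip closure."""
    targets = {"open": 0.0, "pinch": 0.6, "power": 1.0}  # normalized closure
    closure = targets[gesture]
    if contact_force > 5.0:        # newtons; stop squeezing once in firm contact
        closure = min(closure, 0.8)
    return {"closure": closure}

# One control tick through the hierarchy.
cmd = action_level(gesture_level(task_level(0.35), ObjectInfo(2.0)), 1.2)
print(cmd)                        # {'closure': 0.6}
```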
Using hierarchical control strategies for prosthetic hands can provide greater flexibility and adaptability in controlling the hand’s movements [90]. This is achieved by adjusting to different task contexts and sensory feedback. For instance, the prosthetic hand can choose from various grasp types depending on the shape, size, and weight of the object being manipulated [116,117]. Additionally, the prosthetic hand can adjust its motor commands by using its own sensors or feedback from the user’s residual limb to correct issues like slipping or misalignment when grasping objects [118,119].
However, hierarchical control strategies for prosthetic hands also have some limitations and challenges. One of them is the high dimensionality and complexity of the hand’s movements, which require a large number of inputs and outputs to control each degree of freedom of the prosthetic hand. Another one is the variability and uncertainty of sensory feedback, which can affect the accuracy and reliability of prosthetic hand control [85,86]. To overcome these limitations and challenges, the use of control synergies for prosthetic hands has been proposed.

7. Control Synergies

The brain controls the hand by coordinating many DoFs and integrating sensory feedback [120]. Instead of controlling each DoF separately, the brain exploits synergies, which are patterns of correlated movements or postures that simplify the control space [121]. Synergies enhance the robustness and adaptability of the hand [65] and enable the design of more functional, usable, and acceptable devices that satisfy the user’s needs and expectations [59,122].
The development of artificial hands that mimic natural and intuitive control has been inspired by the concept of human hand synergies [59]. Proposed strategies are typically divided into two categories: software-based and hardware-based.
Software strategies use algorithms to map the signals from the user to the movements or configurations of the artificial hand [58]. These algorithms can be based on statistical methods, machine learning, optimization, or bio-inspired models [26]. Software strategies can adapt to any artificial hand with any number of DoFs and can be personalized to meet the user’s preferences and requirements [123,124]. However, they require considerable computational power and a dependable signal acquisition system [62].
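As a sketch of a synergy-based software strategy (synthetic data; a real system would fit the synergy matrix on recorded grasp postures), PCA compresses many joint angles into a few synergy activations, and the same matrix maps a low-dimensional user command back to a full hand posture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 200 hand postures x 15 joint angles (radians),
# generated from 2 latent synergies plus noise to mimic correlated joints.
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 15))
postures = latent @ mixing + 0.05 * rng.standard_normal((200, 15))

# Fit synergies with PCA (top-2 principal components via SVD).
mean = postures.mean(axis=0)
_, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
synergies = vt[:2]                      # (2, 15) synergy matrix

def posture_from_commands(cmd: np.ndarray) -> np.ndarray:
    """Map a 2-D user command to a full 15-joint posture."""
    return mean + cmd @ synergies

# A single 2-D command (e.g., from two EMG channels) drives all 15 joints.
print(posture_from_commands(np.array([1.0, -0.5])).round(2))
```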
Hardware strategies, by contrast, use physical mechanisms such as tendons, springs, gears, or other mechanical components to rigidly or compliantly couple some of the DoFs of an artificial hand [63,125]. They reduce the number of actuators and sensors needed and enable passive adaptation to various object shapes and sizes [126]. However, this approach may restrict the hand’s flexibility and fine motor skills, which may not match the user’s desired movements or expectations [53].
Both software and hardware strategies require a trade-off between complexity and performance, which can be challenging since each has its own advantages and disadvantages [48]. Current solutions have found success through a hybrid approach that leverages the strengths of both software and hardware [127].
The way humans use their hands is not fixed, but rather adaptable and shaped by context [128], and artificial hands differ from natural ones in movement, force, and sensory responses [129]. As a result, artificial hands need to be able to learn and adjust their movements based on experience and specific tasks [130,131].

8. Artificial Intelligence (AI): Mimicking the Human Brain to Support Prosthetic Hand Control

The control of prosthetic hands based on sEMG signals is a challenging task that requires a lot of computational power and complex processing. Different methods based on deep, continuous, and incremental learning have been proposed to improve the accuracy and robustness of sEMG-based gesture recognition and control for prosthetic hands [132]. These methods aim to mimic the human brain’s ability to control hand movements in a natural and intuitive way, by learning from sEMG signals and adapting to the user’s preferences and needs [133]. Some examples of how these methods have been applied to the control of prosthetic hands are the following:
  • Deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, generative adversarial networks (GANs), vision transformers, and temporal convolutional networks (TCNs), can automatically extract features and accurately classify sEMG signals for various hand movements and grasp types, eliminating the need for manual feature engineering [134,135] (a minimal classifier sketch is given after this list). Furthermore, these models can leverage multimodal signals, such as sEMG, EEG, and peripheral nerve interface signals, to achieve more natural and intuitive control of prosthetic limbs [136].
  • Continuous learning techniques, such as adversarial and sparsity-prior learning, update the sEMG-based control system while preserving previous knowledge [137]. These techniques help the control system cope with the variability and nonstationarity of sEMG signals caused by fatigue, sweat, electrode displacement, and arm position, and they can further adapt to user preferences and feedback, refining the control model [138].
  • Incremental learning techniques like online, transfer, and lifelong learning have been utilized to enhance the sEMG-based control system with new knowledge and skills, without the need to retrain the entire model from the beginning [139]. By utilizing techniques that adapt successful models from previous subjects, the training time and effort required to control an upper limb prosthesis can be reduced. Additionally, these techniques can allow the user to learn new gestures or movements with the prosthetic hand without interfering with pre-existing ones [138,139].
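The sketch below illustrates the deep learning bullet above: a minimal 1D CNN for windowed sEMG gesture classification, written in PyTorch. The channel count, window length, class count, and layer sizes are illustrative assumptions; published models differ in depth, preprocessing, and training regime.

```python
import torch
import torch.nn as nn

class EMGNet(nn.Module):
    """Minimal 1D CNN for windowed sEMG gesture classification."""
    def __init__(self, channels=8, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),     # -> (batch, 64, 1)
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# One training step on a synthetic batch (32 windows of 200 samples).
model = EMGNet()
x = torch.randn(32, 8, 200)
y = torch.randint(0, 6, (32,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                          # gradients ready for an optimizer
print(f"loss: {loss.item():.3f}")
```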
The use of AI for improving prosthetic hand control is an innovative and promising approach that can enhance the functionality and acceptance of sEMG-based control systems. With the aid of deep, continual, and incremental learning methods, these control systems can learn from sEMG signals and adapt to the user’s preferences and needs, much like the human brain. This approach can also overcome challenges and limitations of conventional sEMG-based control systems, such as sEMG signal variability and nonstationarity, data interference and loss, the scalability and generalization of learning models, and the need for user training and adaptation.

9. Discussion, Opportunities, and Open Issues

This work summarizes the brain mechanisms for hand control and their engineering implications for developing artificial hands that can interface with the human brain. It has emphasized the significance of creating prosthetic hands that can offer sensory feedback, of control interfaces that can translate sensory information into neural signals, of control algorithms that can integrate sensory information from various sources, and of learning mechanisms that can adjust to changes in sensory inputs or outputs.
However, current control strategies for prosthetic hands still face several limitations and challenges, compared to the natural hand and its control by the brain.
Natural hands receive a diverse range of somatosensory information, such as texture, shape, weight, pain, or proprioception, that helps them to perceive and manipulate objects [8,84]. Prosthetic hands, in contrast, have limited and unrealistic sensory capabilities: they rely on a few sensors that measure contact force, position, or temperature, which are insufficient to capture the richness and complexity of natural touch [113].
The delivery of sensory information from prosthetic hands to users through control interfaces is usually noninvasive but less reliable than natural pathways. Invasive techniques, which require implanting electrodes or wires into the peripheral nerves or brain, can cause tissue damage, inflammation, infection, or rejection [9,140], and the quality and stability of neural signals can degrade over time due to scar formation, electrode displacement, or noise [52,141]. This may affect the safety, comfort, functionality, and durability of a prosthetic hand [122].
The brain processes sensory information from different modalities and sources in a complex and flexible way, depending on the context, task, and reliability of sensory inputs [58,59]. The brain also uses hierarchical and parallel networks that process sensory information at different levels, from low-level signals to high-level representations [142,143]. However, control algorithms for prosthetic hands are often simplistic and rigid. They use predefined rules or models that combine sensory inputs with motor outputs or spatial representations but do not account for the variability and uncertainty of natural touch [144,145]. This may affect the accuracy, adaptability, naturalness, and intuitiveness of a prosthetic hand.
Controlling prosthetic hands using sEMG signals poses several challenges due to the signals’ noisy, variable, and complex nature. Sophisticated techniques are required to translate these signals into reliable and meaningful commands. The main obstacles associated with using sEMG signals for prosthetic hand control include the following:
  • sEMG signals are nonstationary and can vary due to physiological and environmental factors like fatigue, sweat, electrode displacement, and arm position. Such changes can impact the signals’ amplitude, frequency, and morphology, thereby affecting the control system’s accuracy and stability [146,147]. Therefore, adaptive and robust methods are needed to cope with these changes.
  • sEMG signals are susceptible to external interference, such as electromagnetic fields, power lines, and other biological signals, which introduces noise and artefacts and degrades signal quality and signal-to-noise ratio [96,138]. Filtering and denoising techniques are required to remove such interference (a minimal preprocessing sketch is given after this list).
  • sEMG signals have limited resolution and provide incomplete information about hand movements. The signals produced by sEMG primarily indicate muscle activity and contraction, but they may not always correspond accurately to finger movements and kinematics [45,134]. Additionally, sEMG signals may also have a low spatial and temporal resolution, especially for fine and dexterous movements like grasping and manipulating objects [10,148]. These limitations may affect the functionality and performance of the control system and require feature extraction and enhancement techniques to improve them.
  • The number and quality of sEMG sensors and electrodes may be limited, which can affect their ability to adequately cover and sample the signals. The sensors and electrodes may also be constrained by size, shape, placement, and contact, which further impacts spatial and temporal coverage and sampling [96,138]. The quality of sensors and electrodes can also affect signal acquisition and transmission [45]. These limitations may affect the overall usability and efficiency of the system.
  • Standardization and validation of sEMG datasets and protocols are lacking, resulting in potential incomparability and irreproducibility between different studies and systems. Variations in data collection, preprocessing, segmentation, labelling, and evaluation, as well as user characteristics such as age, gender, health condition, and amputation level, can all impact the consistency and generalization of sEMG datasets and protocols [148].
  • Implementing sEMG-based control systems in real time and online may be limited by the computational complexity and power consumption of machine learning and deep learning algorithms. Achieving good performance and accuracy with these algorithms may require high computational resources and training data [138]. The algorithms’ complexity and power consumption can affect the control system’s speed and efficiency, necessitating optimization and compression techniques to reduce them [148].
  • Using sEMG-based control systems for prosthetic hands requires user training and adaptation, which can be challenging and time-consuming. The user needs to learn the gestures and motions that the control system can recognize and adjust the control parameters and feedback mechanisms to suit their preferences and needs [10,138]. These factors can affect the usability and comfort of the control system and demand user-friendly and personalized techniques to facilitate them [48,93].
  • The ethical and social implications of using AI for prosthetic hand control may raise some concerns and challenges, such as privacy, security, accountability, and responsibility [149]. To control the prosthetic hand with AI, the system needs to collect and process sensitive and personal data from the users, such as their sEMG signals, hand movements, and preferences [150,151]. These data help the system to learn the users’ intentions and behaviors and to provide them with suitable feedback and control options [152]. However, this also means that the system may make decisions and actions that may affect the users’ health, safety, and well-being, such as moving the prosthetic hand in unexpected or harmful ways [153]. The ethical and social implications of using AI for prosthetic hand control may affect the trust and acceptance of the control system and require ethical and social guidelines and regulations to address them [154].
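The preprocessing mentioned in the interference bullet above can be sketched as follows, using a 20–450 Hz band-pass plus a 50 Hz notch; these are common textbook choices for sEMG, but exact cutoffs, filter orders, and the mains frequency vary across systems.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 2000          # assumed sampling rate (Hz)

def preprocess_semg(raw: np.ndarray) -> np.ndarray:
    """Band-pass 20-450 Hz to keep the sEMG band, then notch out
    50 Hz power-line interference (use 60 Hz where applicable)."""
    b, a = butter(4, [20, 450], btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw)
    b_n, a_n = iirnotch(w0=50, Q=30, fs=FS)
    return filtfilt(b_n, a_n, x)

# Example: synthetic sEMG contaminated by 50 Hz hum and slow drift.
t = np.arange(0, 1.0, 1.0 / FS)
raw = (0.1 * np.random.randn(len(t))          # muscle activity (broadband)
       + 0.5 * np.sin(2 * np.pi * 50 * t)     # power-line interference
       + 0.3 * np.sin(2 * np.pi * 2 * t))     # low-frequency motion artefact
clean = preprocess_semg(raw)
print(f"signal variance: {np.var(raw):.3f} -> {np.var(clean):.3f}")
```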
As discussed above, different methods based on deep, continuous, and incremental learning have been proposed to improve the accuracy and robustness of prosthetic hand control. These methods aim to mimic the human brain’s ability to control hand movements in a natural and intuitive way, by learning from sEMG signals and adapting to the user’s preferences and needs. Both the human brain and deep, continuous, and incremental learning systems can process information, learn from data, and adapt to changing environments and tasks.
The human brain and deep learning models have some similarities in their structure, function, and mechanisms, as they are both composed of networks of neurons that can transmit and process signals [109,155]. However, there are several differences between the neurons in the human brain and the ones in deep learning models. Indeed, the neurons in the human brain are biological cells that communicate through electrical and chemical synapses, which are complex and dynamic structures that can adjust signal transmission and plasticity [156,157]. On the other hand, the neurons in deep learning models are artificial units that perform mathematical operations and activations, which are simpler and more deterministic functions that can approximate signal processing and learning. Although both systems have multiple layers and connections of neurons, the human brain has many more neurons and synapses than deep learning models, allowing it to handle more complex and diverse tasks [10]. Additionally, the human brain can use more sophisticated and flexible learning rules than deep learning models and can balance between stability and plasticity. The human brain controls hand movements by sending signals from the motor cortex to the spinal cord and the muscles, which are coordinated by the sensory feedback and the cerebellum [147,158]. Conversely, deep learning models control prosthetic hand movements by sending commands from the classifier to the actuator, based on sEMG signals and the control algorithm [19,159,160].
The human brain and continuous learning techniques share a common ability to update their knowledge and skills based on new data while retaining previous ones [158,161]. However, they also have differences and limitations in their methods and mechanisms. The human brain can use neuroplasticity, which allows it to modify its synaptic connections and neural pathways in response to experience and learning [10,162]. This helps the brain adapt to the variability and nonstationarity of sEMG signals caused by factors like fatigue, sweat, electrode displacement, and arm position [138,162]. Continuous learning techniques use methods to reduce the interference with and forgetting of old data. They update the control model based on new data and feedback [137,163]. However, both face the trade-off between stability and plasticity, which can lead to catastrophic forgetting or interference [164,165]. The human brain uses mechanisms like sleep, rehearsal, and consolidation to prevent these issues [159,162], while continuous learning techniques use methods like regularization, replay, and distillation to mitigate them [160,161,166].
The human brain and incremental learning techniques share some similarities in their ability to add new information or abilities without needing to start from scratch [162]. However, they also have differences and limitations in their methods. The human brain can use memory consolidation to store and retrieve new information by transferring it from short-term to long-term memory [10]. This allows it to learn new gestures or motions without affecting existing ones by forming new neural associations and representations [137,139]. Incremental learning techniques can use online learning, transfer learning, and lifelong learning methods to update and expand learning models with new data or tasks [10]. This enables users to learn new gestures or motions with a prosthetic hand by adding new classes or features to the learning models [162]. However, both the human brain and incremental learning techniques face challenges in scalability and generalization, meaning they may struggle to handle a large number of classes or tasks or transfer knowledge and skills to new scenarios [10]. The human brain can use chunking, abstraction, and analogy strategies to overcome limitations, while incremental learning techniques need to use curriculum learning, metalearning, and multitask learning methods to address scalability and generalization challenges [160,166].
These methods can enhance the functionality and acceptability of the control systems for prosthetic devices and ultimately improve the quality of life of amputees. Before these methods can be widely adopted and implemented in real-world scenarios, their challenges and limitations must be addressed.
The human brain and deep learning models have different levels of complexity and scalability. The human brain has about 86 billion neurons and 100 trillion synapses [167], while deep learning models have much fewer artificial neurons and connections [168]. The human brain can perform more complex and diverse tasks than deep learning models but is limited by energy consumption, speed, and storage capacity [169]. The human brain cannot process large amounts of data as fast and efficiently as deep learning models, but these models require more computational resources and training data to achieve good performance [109].
The human brain and continuous learning techniques have different degrees of stability and plasticity. The human brain can maintain existing knowledge and skills while also adapting to new situations and challenges, balancing stability and plasticity [170]. Often, techniques for continuous learning must strike a balance between stability and plasticity. This means they may face two potential problems: catastrophic forgetting (losing previous knowledge and skills) or interference (degrading new knowledge and skills) [162]. The brain can prevent forgetting and interference through sleep, rehearsal, and consolidation [159], while continuous learning techniques use methods such as regularization, replay, and distillation to mitigate them [171].
The human brain and incremental learning techniques have different challenges in terms of scalability and generalization. The human brain has a limited capacity and lifespan but can learn new things incrementally [161]. Incremental learning techniques allow for the gradual acquisition of new information and skills without having to relearn everything from the beginning [172]. However, these techniques may face obstacles in scalability and generalization. This means that they may struggle to handle a large number of tasks or classes and may not be able to transfer learned knowledge and skills to new situations or domains [162]. The human brain has the ability to overcome limitations in capacity and lifespan by using strategies like chunking, abstraction, and analogy [173]. On the other hand, incremental learning techniques must utilize methods like curriculum learning, metalearning, and multitask learning to tackle challenges related to scalability and generalization [139].
Despite proposed solutions, there are still unsolved issues with prosthetic hands that need attention:
  • To design artificial hands that can emulate the sensory capabilities of the natural hand, using novel materials and technologies that can sense and generate different types of somatosensory stimuli [128,130].
  • To develop control interfaces that can deliver sensory information from artificial hands to the user’s nervous system, using biocompatible and biodegradable materials and devices that can interface with the nervous tissue without causing damage or rejection [128,130].
  • To implement control algorithms that can integrate sensory information from different modalities and sources, using machine learning and artificial intelligence methods that can learn from data and feedback and adapt to context and task [174,175].
  • To incorporate learning mechanisms that can adapt to changes in sensory inputs or outputs, using computational neuroscience models and theories that can capture the plasticity and adaptability of the brain, and incorporate reward and reinforcement signals [70,71].
In conclusion, developing a control strategy for artificial hands inspired by the human brain is a promising but challenging research direction. Since January 2022, various control strategies have been developed to address the issue of achieving human-like control of prosthetic hands (Figure 3). However, the problem remains unsolved.
By replicating human hand synergies with prosthetic hands, engineers could create more natural, intuitive, and adaptable devices that can meet the needs and expectations of people who need or use artificial hands. Although progress has been made, there is still a need for better prosthetic hands that can closely interact with the human nervous system. More sophisticated control algorithms are required to accommodate diverse sensory inputs and outputs. Moreover, more efficient learning mechanisms are necessary in order to promote motor learning and habit formation. By doing so, researchers could develop artificial hands that can truly mimic and interface with the human brain and improve the quality of life of people who need or use artificial hands.

Author Contributions

C.G. designed the paper, analyzed the literature, and wrote the paper; E.G. supervised the writing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the European Union-funded project IntelliMan (grant no. 101070136).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Napier, J.R. The prehensile movements of the human hand. J. Bone Jt. Surg. Br. Vol. 1956, 38, 902–913. [Google Scholar] [CrossRef]
  2. Kyberd, P. Making Hands: A History of Prosthetic Arms; Academic Press: Cambridge, MA, USA, 2021. [Google Scholar]
  3. Balasubramanian, R.; Santos, V.J. The Human Hand as an Inspiration for Robot Hand Development; Springer: Berlin/Heidelberg, Germany, 2014; Volume 95. [Google Scholar]
  4. Maravita, A.; Iriki, A. Tools for the body (schema). Trends Cogn. Sci. 2004, 8, 79–86. [Google Scholar] [CrossRef]
  5. Culham, J.C.; Valyear, K.F. Human parietal cortex in action. Curr. Opin. Neurobiol. 2006, 16, 205–212. [Google Scholar] [CrossRef]
  6. Rizzolatti, G.; Sinigaglia, C. The functional role of the parieto-frontal mirror circuit: Interpretations and misinterpretations. Nat. Rev. Neurosci. 2010, 11, 264–274. [Google Scholar] [CrossRef]
  7. Zollo, L.; Rossini, L.; Bravi, M.; Magrone, G.; Sterzi, S.; Guglielmelli, E. Quantitative evaluation of upper-limb motor control in robot-aided rehabilitation. Med. Biol. Eng. Comput. 2011, 49, 1131–1144. [Google Scholar] [CrossRef]
  8. Ciancio, A.L.; Cordella, F.; Barone, R.; Romeo, R.A.; Dellacasa Bellingegni, A.; Sacchetti, R.; Davalli, A.; Di Pino, G.; Ranieri, F.; Di Lazzaro, V.; et al. Control of prosthetic hands via the peripheral nervous system. Front. Neurosci. 2016, 10, 116. [Google Scholar] [CrossRef]
  9. Zollo, L.; Roccella, S.; Guglielmelli, E.; Carrozza, M.C.; Dario, P. Biomechatronic design and control of an anthropomorphic artificial hand for prosthetic and robotic applications. IEEE/ASME Trans. Mechatronics 2007, 12, 418–429. [Google Scholar] [CrossRef]
  10. Castellini, C.; Artemiadis, P.; Wininger, M.; Ajoudani, A.; Alimusaj, M.; Bicchi, A.; Caputo, B.; Craelius, W.; Dosen, S.; Englehart, K.; et al. Proceedings of the first workshop on peripheral machine interfaces: Going beyond traditional surface electromyography. Front. Neurorobotics 2014, 8, 22. [Google Scholar] [CrossRef]
  11. Jensen, W. Natural sensory feedback for phantom limb pain modulation and therapy. In Converging Clinical and Engineering Research on Neurorehabilitation II: Proceedings of the 3rd International Conference on NeuroRehabilitation (ICNR2016), Segovia, Spain, 18–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2017; pp. 719–723. [Google Scholar]
  12. Chen, Z.; Min, H.; Wang, D.; Xia, Z.; Sun, F.; Fang, B. A Review of Myoelectric Control for Prosthetic Hand Manipulation. Biomimetics 2023, 8, 328. [Google Scholar] [CrossRef]
  13. Triwiyanto, T.; Rahmawati, T.; Pawana, I.P.A.; Lamidi, L.; Hamzah, T.; Pudji, A.; Mak’ruf, M.R.; Luthfiyah, S. State-of-the-art method in prosthetic hand design: A review. J. Biomimetics Biomater. Biomed. Eng. 2021, 50, 15–24. [Google Scholar] [CrossRef]
  14. Yang, B.; Jiang, L.; Ge, C.; Cheng, M.; Zhang, J. Control of myoelectric prosthetic hand with a novel proximity-tactile sensor. Sci. China Technol. Sci. 2022, 65, 1513–1523. [Google Scholar] [CrossRef]
  15. Dellacasa Bellingegni, A.; Gruppioni, E.; Colazzo, G.; Davalli, A.; Sacchetti, R.; Guglielmelli, E.; Zollo, L. NLR, MLP, SVM, and LDA: A comparative analysis on EMG data from people with trans-radial amputation. J. Neuroeng. Rehabil. 2017, 14, 82. [Google Scholar] [CrossRef]
  16. Park, S.H.; Lee, S.P. EMG pattern recognition based on artificial intelligence techniques. IEEE Trans. Rehabil. Eng. 1998, 6, 400–405. [Google Scholar] [CrossRef]
  17. Yang, D.; Zhao, J.; Gu, Y.; Jiang, L.; Liu, H. EMG pattern recognition and grasping force estimation: Improvement to the myocontrol of multi-DOF prosthetic hands. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 516–521. [Google Scholar]
  18. Maat, B.; Smit, G.; Plettenburg, D.; Breedveld, P. Passive prosthetic hands and tools: A literature review. Prosthetics Orthot. Int. 2018, 42, 66–74. [Google Scholar] [CrossRef]
  19. Chen, K.; Zhang, Y.; Zhang, Z.; Yang, Y.; Ye, H. Trans humeral prosthesis based on sEMG and SSVEP-EEG signals. In Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, 6–8 December 2019; pp. 2665–2670. [Google Scholar]
  20. Saetta, G.; Cognolato, M.; Atzori, M.; Faccio, D.; Giacomino, K.; Mittaz Hager, A.G.; Tiengo, C.; Bassetto, F.; Müller, H.; Brugger, P. Gaze, behavioral, and clinical data for phantom limbs after hand amputation from 15 amputees and 29 controls. Sci. Data 2020, 7, 60. [Google Scholar] [CrossRef]
  21. Cheng, K.Y.; Rehani, M.; Hebert, J.S. A scoping review of eye tracking metrics used to assess visuomotor behaviours of upper limb prosthesis users. J. Neuroeng. Rehabil. 2023, 20, 49. [Google Scholar] [CrossRef]
  22. Skaramagkas, V.; Giannakakis, G.; Ktistakis, E.; Manousos, D.; Karatzanis, I.; Tachos, N.S.; Tripoliti, E.; Marias, K.; Fotiadis, D.I.; Tsiknakis, M. Review of eye tracking metrics involved in emotional and cognitive processes. IEEE Rev. Biomed. Eng. 2021, 16, 260–277. [Google Scholar] [CrossRef]
  23. Cognolato, M.; Gijsberts, A.; Gregori, V.; Saetta, G.; Giacomino, K.; Hager, A.G.M.; Gigli, A.; Faccio, D.; Tiengo, C.; Bassetto, F.; et al. Gaze, visual, myoelectric, and inertial data of grasps for intelligent prosthetics. Sci. Data 2020, 7, 43. [Google Scholar] [CrossRef]
  24. Hikosaka, O.; Takikawa, Y.; Kawagoe, R. Role of the basal ganglia in the control of purposive saccadic eye movements. Physiol. Rev. 2000, 80, 953–978. [Google Scholar] [CrossRef]
  25. Stephens-Fripp, B.; Alici, G.; Mutlu, R. A Review of Non-Invasive Sensory Feedback Methods for Transradial Prosthetic Hands. IEEE Access 2018, 6, 6878–6899. [Google Scholar] [CrossRef]
  26. Marinelli, A.; Boccardo, N.; Tessari, F.; Di Domenico, D.; Caserta, G.; Canepa, M.; Gini, G.; Barresi, G.; Laffranchi, M.; De Michieli, L.; et al. Active upper limb prostheses: A review on current state and upcoming breakthroughs. Prog. Biomed. Eng. 2022, 5, 012001. [Google Scholar] [CrossRef]
  27. Parr, J.V.V.; Wright, D.J.; Uiga, L.; Marshall, B.; Mohamed, M.O.; Wood, G. A scoping review of the application of motor learning principles to optimize myoelectric prosthetic hand control. Prosthetics Orthot. Int. 2022, 46, 274–281. [Google Scholar] [CrossRef]
  28. Wijk, U.; Carlsson, I.K.; Antfolk, C.; Björkman, A.; Rosén, B. Sensory Feedback in Hand Prostheses: A Prospective Study of Everyday Use. Front. Neurosci. 2020, 14, 663. [Google Scholar] [CrossRef]
  29. Jabban, L.; Dupan, S.; Zhang, D.; Ainsworth, B.; Nazarpour, K.; Metcalfe, B.W. Sensory feedback for upper-limb prostheses: Opportunities and barriers. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 738–747. [Google Scholar] [CrossRef]
  30. Roche, A.D.; Bailey, Z.K.; Gonzalez, M.; Vu, P.P.; Chestek, C.A.; Gates, D.H.; Kemp, S.W.; Cederna, P.S.; Ortiz-Catalan, M.; Aszmann, O.C. Upper limb prostheses: Bridging the sensory gap. J. Hand Surg. (Eur. Vol.) 2023, 48, 182–190. [Google Scholar] [CrossRef]
  31. Rodriguez-Cheu, L.E.; Casals, A. Sensing and control of a prosthetic hand with myoelectric feedback. In Proceedings of the First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2006), Pisa, Italy, 20–22 February 2006; pp. 607–612. [Google Scholar]
  32. Hamid, Q.; Hasan, W.W.; Hanim, M.A.; Nuraini, A.; Hamidon, M.; Ramli, H. Shape memory alloys actuated upper limb devices: A review. Sensors Actuators Rep. 2023, 5, 100160. [Google Scholar] [CrossRef]
  33. Bensmaia, S.J.; Tyler, D.J.; Micera, S. Restoration of sensory information via bionic hands. Nat. Biomed. Eng. 2023, 7, 443–455. [Google Scholar] [CrossRef]
  34. Sensinger, J.W.; Dosen, S. A Review of Sensory Feedback in Upper-Limb Prostheses From the Perspective of Human Motor Control. Front. Neurosci. 2020, 14, 345. [Google Scholar] [CrossRef]
  35. Guo, W.; Xu, W.; Zhao, Y.; Shi, X.; Sheng, X.; Zhu, X. Towards Human-in-the-Loop Shared Control for Upper-Limb Prostheses: A Systematic Analysis of State-of-the-Art Technologies. IEEE Trans. Med. Robot. Bionics 2023, 5, 563–579. [Google Scholar] [CrossRef]
  36. Svensson, P.; Malesevic, N.; Wijk, U.; Björkman, A.; Antfolk, C. The Rubber Hand Illusion evaluated using different stimulation modalities. Front. Neurosci. 2023, 17, 1237053. [Google Scholar] [CrossRef]
  37. Marinelli, A.; Boccardo, N.; Canepa, M.; Di Domenico, D.; Semprini, M.; Chiappalone, M.; Laffranchi, M.; De Michieli, L.; Dosen, S. A Novel Method for Vibrotactile Proprioceptive Feedback Using Spatial Encoding and Gaussian Interpolation. IEEE Trans. Biomed. Eng. 2023, 1–12. [Google Scholar] [CrossRef]
  38. Dey, A.; Basumatary, H.; Hazarika, S.M. A Decade of Haptic Feedback for Upper Limb Prostheses. IEEE Trans. Med. Robot. Bionics 2023, 5, 793–810. [Google Scholar] [CrossRef]
  39. Graziano, M. The organization of behavioral repertoire in motor cortex. Annu. Rev. Neurosci. 2006, 29, 105–134. [Google Scholar] [CrossRef]
  40. Andersen, R.A.; Cui, H. Intention, action planning, and decision making in parietal-frontal circuits. Neuron 2009, 63, 568–583. [Google Scholar] [CrossRef]
  41. Krakauer, J.W.; Mazzoni, P. Human sensorimotor learning: Adaptation, skill, and beyond. Curr. Opin. Neurobiol. 2011, 21, 636–644. [Google Scholar] [CrossRef]
  42. Dayan, E.; Cohen, L.G. Neuroplasticity subserving motor skill learning. Neuron 2011, 72, 443–454. [Google Scholar] [CrossRef]
  43. Biddiss, E.A.; Chau, T.T. Upper limb prosthesis use and abandonment: A survey of the last 25 years. Prosthetics Orthot. Int. 2007, 31, 236–257. [Google Scholar] [CrossRef]
  44. van der Riet, D.; Stopforth, R.; Bright, G.; Diegel, O. An overview and comparison of upper limb prosthetics. In Proceedings of the 2013 Africon, Pointe aux Piments, Mauritius, 9–12 September 2013; pp. 1–8. [Google Scholar] [CrossRef]
  45. Luu, D.K.; Nguyen, A.T.; Jiang, M.; Drealan, M.W.; Xu, J.; Wu, T.; kin Tam, W.; Zhao, W.; Lim, B.Z.H.; Overstreet, C.K.; et al. Artificial Intelligence Enables Real-Time and Intuitive Control of Prostheses via Nerve Interface. arXiv 2022, arXiv:2203.08648. [Google Scholar]
  46. Zecca, M.; Micera, S.; Carrozza, M.C.; Dario, P. Control of multifunctional prosthetic hands by processing the electromyographic signal. Crit. Rev. Biomed. Eng. 2002, 30, 459–485. [Google Scholar] [CrossRef]
  47. Kuiken, T.A.; Li, G.; Lock, B.A.; Lipschutz, R.D.; Miller, L.A.; Stubblefield, K.A.; Englehart, K.B. Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA 2009, 301, 619–628. [Google Scholar] [CrossRef]
  48. Gentile, C.; Cordella, F.; Zollo, L. Hierarchical Human-Inspired Control Strategies for Prosthetic Hands. Sensors 2022, 22, 2521. [Google Scholar] [CrossRef]
  49. Jiang, N.; Dosen, S.; Muller, K.R.; Farina, D. Myoelectric control of artificial limbs—Is there a need to change focus? [In the spotlight]. IEEE Signal Process. Mag. 2012, 29, 150–152. [Google Scholar] [CrossRef]
  50. Cipriani, C.; Zaccone, F.; Micera, S.; Carrozza, M.C. On the shared control of an EMG-controlled prosthetic hand: Analysis of user–prosthesis interaction. IEEE Trans. Robot. 2008, 24, 170–184. [Google Scholar] [CrossRef]
  51. Jones, L.A.; Lederman, S.J. Human Hand Function; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  52. Raspopovic, S.; Capogrosso, M.; Petrini, F.M.; Bonizzato, M.; Rigosa, J.; Di Pino, G.; Carpaneto, J.; Controzzi, M.; Boretius, T.; Fernandez, E.; et al. Restoring natural sensory feedback in real-time bidirectional hand prostheses. Sci. Transl. Med. 2014, 6, 222ra19. [Google Scholar] [CrossRef]
  53. Zollo, L.; Di Pino, G.; Ciancio, A.L.; Ranieri, F.; Cordella, F.; Gentile, C.; Noce, E.; Romeo, E.A.; Dellacasa Bellingegni, A.; Vadalà, G.; et al. Restoring tactile sensations via neural interfaces for real-time force-and-slippage closed-loop control of bionic hands. Sci. Robot. 2019, 4, eaau9924. [Google Scholar] [CrossRef]
  54. Iberite, F.; Muheim, J.; Akouissi, O.; Gallo, S.; Rognini, G.; Morosato, F.; Clerc, A.; Kalff, M.; Gruppioni, E.; Micera, S.; et al. Restoration of natural thermal sensation in upper-limb amputees. Science 2023, 380, 731–735. [Google Scholar] [CrossRef]
  55. Cordella, F.; Gentile, C.; Zollo, L.; Barone, R.; Sacchetti, R.; Davalli, A.; Siciliano, B.; Guglielmelli, E. A force-and-slippage control strategy for a poliarticulated prosthetic hand. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 3524–3529. [Google Scholar]
  56. Kyberd, P. Slip Detection Strategies for Automatic Grasping in Prosthetic Hands. Sensors 2023, 23, 4433. [Google Scholar] [CrossRef]
  57. Johansson, R.S.; Westling, G. Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects. Exp. Brain Res. 1984, 56, 550–564. [Google Scholar] [CrossRef]
  58. Cordella, F.; Ciancio, A.L.; Sacchetti, R.; Davalli, A.; Cutti, A.G.; Guglielmelli, E.; Zollo, L. Literature review on needs of upper limb prosthesis users. Front. Neurosci. 2016, 10, 209. [Google Scholar] [CrossRef]
  59. Salvietti, G. Replicating human hand synergies onto robotic hands: A review on software and hardware strategies. Front. Neurorobotics 2018, 12, 27. [Google Scholar] [CrossRef]
  60. Jabban, L.; Metcalfe, B.W.; Raines, J.; Zhang, D.; Ainsworth, B. Experience of adults with upper-limb difference and their views on sensory feedback for prostheses: A mixed methods study. J. Neuroeng. Rehabil. 2022, 19, 80. [Google Scholar] [CrossRef]
  61. Clement, R.; Bugler, K.E.; Oliver, C.W. Bionic prosthetic hands: A review of present technology and future aspirations. Surgeon 2011, 9, 336–340. [Google Scholar] [CrossRef]
  62. Fougner, A.; Stavdahl, Ø.; Kyberd, P.J.; Losier, Y.G.; Parker, P.A. Control of upper limb prostheses: Terminology and proportional myoelectric control—A review. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 663–677. [Google Scholar] [CrossRef]
  63. Dollar, A.M.; Howe, R.D. The highly adaptive SDM hand: Design and performance evaluation. Int. J. Robot. Res. 2010, 29, 585–597. [Google Scholar] [CrossRef]
  64. Ansuini, C.; Santello, M.; Massaccesi, S.; Castiello, U. Effects of end-goal on hand shaping. J. Neurophysiol. 2006, 95, 2456–2465. [Google Scholar] [CrossRef]
  65. Todorov, E. Optimality principles in sensorimotor control. Nat. Neurosci. 2004, 7, 907–915. [Google Scholar] [CrossRef]
  66. Husain, M. Neural control of hand movement. Brain 2022, 145, 1191–1192. [Google Scholar] [CrossRef]
  67. Areas of the Brain Involved in Movement—Psychology Info. Available online: https://psychology-info.com/areas-of-the-brain-involved-inmovement (accessed on 4 July 2023).
  68. Tanji, J.; Shima, K. Role for supplementary motor area cells in planning several movements ahead. Nature 1994, 371, 413–416. [Google Scholar] [CrossRef]
  69. Sowden, S.; Catmur, C. The role of the right temporoparietal junction in the control of imitation. Cereb. Cortex 2015, 25, 1107–1113. [Google Scholar] [CrossRef]
  70. Doyon, J.; Bellec, P.; Amsel, R.; Penhune, V.; Monchi, O.; Carrier, J.; Lehéricy, S.; Benali, H. Contributions of the basal ganglia and functionally related brain structures to motor learning. Behav. Brain Res. 2009, 199, 61–75. [Google Scholar] [CrossRef]
  71. Graybiel, A.M. The basal ganglia and chunking of action repertoires. Neurobiol. Learn. Mem. 1998, 70, 119–136. [Google Scholar] [CrossRef]
  72. Thach, W.T.; Goodkin, H.; Keating, J. The cerebellum and the adaptive coordination of movement. Annu. Rev. Neurosci. 1992, 15, 403–442. [Google Scholar] [CrossRef]
  73. Alexander, G.E.; DeLong, M.R.; Strick, P.L. Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Annu. Rev. Neurosci. 1986, 9, 357–381. [Google Scholar] [CrossRef]
  74. Porter, R.; Lemon, R. Corticospinal Function and Voluntary Movement; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  75. Rossini, P.M.; Burke, D.; Chen, R.; Cohen, L.G.; Daskalakis, Z.; Di Iorio, R.; Di Lazzaro, V.; Ferreri, F.; Fitzgerald, P.; George, M.S.; et al. Non-invasive electrical and magnetic stimulation of the brain, spinal cord, roots and peripheral nerves: Basic principles and procedures for routine clinical and research application. An updated report from an IFCN Committee. Clin. Neurophysiol. 2015, 126, 1071–1107. [Google Scholar] [CrossRef]
  76. Rizzolatti, G.; Craighero, L. The mirror-neuron system. Annu. Rev. Neurosci. 2004, 27, 169–192. [Google Scholar] [CrossRef]
  77. Catani, M.; De Schotten, M.T. A diffusion tensor imaging tractography atlas for virtual in vivo dissections. Cortex 2008, 44, 1105–1132. [Google Scholar] [CrossRef]
  78. Makris, N.; Angelone, L.; Tulloch, S.; Sorg, S.; Kaiser, J.; Kennedy, D.; Bonmassar, G. MRI-based anatomical model of the human head for specific absorption rate mapping. Med. Biol. Eng. Comput. 2008, 46, 1239–1251. [Google Scholar] [CrossRef]
  79. Kaas, J.H.; Pons, T. The Somatosensory System of Primates. In Comparative Primate Biology, Neurosciences; Steklis, H.D., Erwin, J., Eds.; Alan R. Liss, Inc.: New York, NY, USA, 1986; pp. 421–468. [Google Scholar]
  80. Willis, W., Jr. Pain pathways in the primate. Prog. Clin. Biol. Res. 1985, 176, 117–133. [Google Scholar]
  81. Melzack, R.; Wall, P.D. Pain Mechanisms: A New Theory: A gate control system modulates sensory input from the skin before it evokes pain perception and response. Science 1965, 150, 971–979. [Google Scholar] [CrossRef]
  82. D’Angelo, E.; Mazzarello, P.; Prestori, F.; Mapelli, J.; Solinas, S.; Lombardo, P.; Cesana, E.; Gandolfi, D.; Congi, L. The cerebellar network: From structure to function and dynamics. Brain Res. Rev. 2011, 66, 5–15. [Google Scholar] [CrossRef]
  83. Bolognini, N.; Maravita, A. Uncovering multisensory processing through non-invasive brain stimulation. Front. Psychol. 2011, 2, 46. [Google Scholar] [CrossRef]
  84. Edwards, L.L.; King, E.M.; Buetefisch, C.M.; Borich, M.R. Putting the “sensory” into sensorimotor control: The role of sensorimotor integration in goal-directed hand movements after stroke. Front. Integr. Neurosci. 2019, 13, 16. [Google Scholar] [CrossRef]
  85. Stein, B.E.; Stanford, T.R. Multisensory integration: Current issues from the perspective of the single neuron. Nat. Rev. Neurosci. 2008, 9, 255–266. [Google Scholar] [CrossRef]
  86. Stein, B.E.; Stanford, T.R.; Rowland, B.A. The neural basis of multisensory integration in the midbrain: Its organization and maturation. Hear. Res. 2009, 258, 4–15. [Google Scholar] [CrossRef]
  87. Binkofski, F.; Fink, G.R.; Geyer, S.; Buccino, G.; Gruber, O.; Shah, N.J.; Taylor, J.G.; Seitz, R.J.; Zilles, K.; Freund, H.J. Neural activity in human primary motor cortex areas 4a and 4p is modulated differentially by attention to action. J. Neurophysiol. 2002, 88, 514–519. [Google Scholar] [CrossRef]
  88. Nitschke, M.; Arp, T.; Stavrou, G.; Erdmann, C.; Heide, W. The cerebellum in the cerebro-cerebellar network for the control of eye and hand movements—An fMRI study. Prog. Brain Res. 2005, 148, 151–164. [Google Scholar] [PubMed]
  89. Nightingale, J.; Sedgewick, E.M. Control of Movement via Skeletal Muscles. 1979. Available online: https://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=PASCAL8050160739 (accessed on 3 March 2023).
  90. Nightingale, J.M. Microprocessor control of an artificial arm. J. Microcomput. Appl. 1985, 8, 167–173. [Google Scholar] [CrossRef]
  91. Wolpert, D.M.; Kawato, M. Multiple paired forward and inverse models for motor control. Neural Netw. 1998, 11, 1317–1329. [Google Scholar] [CrossRef]
  92. Körding, K.P.; Wolpert, D.M. Bayesian integration in sensorimotor learning. Nature 2004, 427, 244–247. [Google Scholar] [CrossRef]
  93. Seminara, L.; Dosen, S.; Mastrogiovanni, F.; Bianchi, M.; Watt, S.; Beckerle, P.; Nanayakkara, T.; Drewing, K.; Moscatelli, A.; Klatzky, R.L.; et al. A hierarchical sensorimotor control framework for human-in-the-loop robotic hands. Sci. Robot. 2023, 8, eadd5434. [Google Scholar] [CrossRef]
  94. Jeannerod, M.; Arbib, M.A.; Rizzolatti, G.; Sakata, H. Grasping objects: The cortical mechanisms of visuomotor transformation. Trends Neurosci. 1995, 18, 314–320. [Google Scholar] [CrossRef]
  95. Desmurget, M.; Grafton, S. Forward modeling allows feedback control for fast reaching movements. Trends Cogn. Sci. 2000, 4, 423–431. [Google Scholar] [CrossRef]
  96. Castellini, C.; Van Der Smagt, P. Surface EMG in advanced hand prosthetics. Biol. Cybern. 2009, 100, 35–47. [Google Scholar] [CrossRef]
  97. Rosenbaum, D.A.; Meulenbroek, R.J.; Vaughan, J.; Jansen, C. Posture-based motion planning: Applications to grasping. Psychol. Rev. 2001, 108, 709. [Google Scholar] [CrossRef]
  98. Gentilucci, M.; Benuzzi, F.; Bertolani, L.; Daprati, E.; Gangitano, M. Language and motor control. Exp. Brain Res. 2000, 133, 468–490. [Google Scholar] [CrossRef]
  99. Castiello, U. The neuroscience of grasping. Nat. Rev. Neurosci. 2005, 6, 726. [Google Scholar] [CrossRef]
  100. Becchio, C.; Sartori, L.; Castiello, U. Toward you: The social side of actions. Curr. Dir. Psychol. Sci. 2010, 19, 183–188. [Google Scholar] [CrossRef]
  101. Noce, E.; Gentile, C.; Cordella, F.; Ciancio, A.; Piemonte, V.; Zollo, L. Grasp control of a prosthetic hand through peripheral neural signals. J. Phys. Conf. Ser. 2018, 1026, 012006. [Google Scholar] [CrossRef]
  102. Leone, F.; Gentile, C.; Ciancio, A.L.; Gruppioni, E.; Davalli, A.; Sacchetti, R.; Guglielmelli, E.; Zollo, L. Simultaneous sEMG classification of wrist/hand gestures and forces. Front. Neurorobotics 2019, 13, 42. [Google Scholar] [CrossRef]
  103. Hochberg, L.R.; Bacher, D.; Jarosiewicz, B.; Masse, N.Y.; Simeral, J.D.; Vogel, J.; Haddadin, S.; Liu, J.; Cash, S.S.; Van Der Smagt, P.; et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 2012, 485, 372–375. [Google Scholar] [CrossRef]
  104. Jeannerod, M. The timing of natural prehension movements. J. Mot. Behav. 1984, 16, 235–254. [Google Scholar] [CrossRef]
  105. Rosenbaum, D.A.; Cohen, R.G.; Jax, S.A.; Weiss, D.J.; Van Der Wel, R. The problem of serial order in behavior: Lashley’s legacy. Hum. Mov. Sci. 2007, 26, 525–554. [Google Scholar] [CrossRef]
  106. Goodale, M.A.; Milner, A.D. Separate visual pathways for perception and action. Trends Neurosci. 1992, 15, 20–25. [Google Scholar] [CrossRef]
  107. Johansson, R.S.; Flanagan, J.R. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat. Rev. Neurosci. 2009, 10, 345. [Google Scholar] [CrossRef]
  108. Kober, J.; Peters, J. Policy search for motor primitives in robotics. In Advances in Neural Information Processing Systems; Curran: Red Hook, NY, USA, 2008. [Google Scholar]
  109. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  110. Bizzi, E.; Cheung, V.C. The neural origin of muscle synergies. Front. Comput. Neurosci. 2013, 7, 51. [Google Scholar] [CrossRef]
  111. Scott, S.H. Optimal feedback control and the neural basis of volitional motor control. Nat. Rev. Neurosci. 2004, 5, 532–545. [Google Scholar] [CrossRef]
  112. Romeo, R.A.; Oddo, C.; Carrozza, M.C.; Guglielmelli, E.; Zollo, L. Slippage detection with piezoresistive tactile sensors. Sensors 2017, 17, 1844. [Google Scholar] [CrossRef]
  113. Stefanelli, E.; Cordella, F.; Gentile, C.; Zollo, L. Hand Prosthesis Sensorimotor Control Inspired by the Human Somatosensory System. Robotics 2023, 12, 136. [Google Scholar] [CrossRef]
  114. Hogan, N.; Sternad, D. Sensitivity of smoothness measures to movement duration, amplitude, and arrests. J. Mot. Behav. 2009, 41, 529–534. [Google Scholar] [CrossRef]
  115. Mussa-Ivaldi, F.A.; Bizzi, E. Motor learning through the combination of primitives. Philos. Trans. R. Soc. London. Ser. Biol. Sci. 2000, 355, 1755–1769. [Google Scholar] [CrossRef]
  116. Cutkosky, M.R. On grasp choice, grasp models, and the design of hands for manufacturing tasks. IEEE Trans. Robot. Autom. 1989, 5, 269–279. [Google Scholar] [CrossRef]
  117. Santello, M.; Flanders, M.; Soechting, J.F. Postural hand synergies for tool use. J. Neurosci. 1998, 18, 10105–10115. [Google Scholar] [CrossRef]
  118. Shadmehr, R.; Mussa-Ivaldi, F.A. Adaptive representation of dynamics during learning of a motor task. J. Neurosci. 1994, 14, 3208–3224. [Google Scholar] [CrossRef]
  119. Wolpert, D.M.; Ghahramani, Z.; Jordan, M.I. An internal model for sensorimotor integration. Science 1995, 269, 1880–1882. [Google Scholar] [CrossRef]
  120. Santello, M.; Bianchi, M.; Gabiccini, M.; Ricciardi, E.; Salvietti, G.; Prattichizzo, D.; Ernst, M.; Moscatelli, A.; Jörntell, H.; Kappers, A.M.; et al. Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands. Phys. Life Rev. 2016, 17, 1–23. [Google Scholar] [CrossRef]
  121. Bernstein, N. The Co-Ordination and Regulation of Movements; Pergamo Press: London, UK, 1967. [Google Scholar]
  122. Grioli, G.; Wolf, S.; Garabini, M.; Catalano, M.; Burdet, E.; Caldwell, D.; Carloni, R.; Friedl, W.; Grebenstein, M.; Laffranchi, M.; et al. Variable stiffness actuators: The user’s point of view. Int. J. Robot. Res. 2015, 34, 727–743. [Google Scholar] [CrossRef]
  123. Micera, S.; Citi, L.; Rigosa, J.; Carpaneto, J.; Raspopovic, S.; Di Pino, G.; Rossini, L.; Yoshida, K.; Denaro, L.; Dario, P.; et al. Decoding information from neural signals recorded using intraneural electrodes: Toward the development of a neurocontrolled hand prosthesis. Proc. IEEE 2010, 98, 407–417. [Google Scholar] [CrossRef]
  124. Meattini, R.; Benatti, S.; Scarcia, U.; De Gregorio, D.; Benini, L.; Melchiorri, C. An sEMG-based human-robot interface for robotic hands using machine learning and synergies. IEEE Trans. Components, Packag. Manuf. Technol. 2018, 8, 1149–1158. [Google Scholar] [CrossRef]
  125. Carrozza, M.C.; Suppo, C.; Sebastiani, F.; Massa, B.; Vecchi, F.; Lazzarini, R.; Cutkosky, M.R.; Dario, P. The SPRING hand: Development of a self-adaptive prosthesis for restoring natural grasping. Auton. Robot. 2004, 16, 125–141. [Google Scholar] [CrossRef]
  126. Controzzi, M.; Cipriani, C.; Jehenne, B.; Donati, M.; Carrozza, M.C. Bio-inspired mechanical design of a tendon-driven dexterous prosthetic hand. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 499–502. [Google Scholar]
  127. Ajoudani, A.; Godfrey, S.B.; Bianchi, M.; Catalano, M.G.; Grioli, G.; Tsagarakis, N.; Bicchi, A. Exploring teleimpedance and tactile feedback for intuitive control of the pisa/iit softhand. IEEE Trans. Haptics 2014, 7, 203–215. [Google Scholar] [CrossRef]
  128. Santello, M.; Soechting, J.F. Gradual molding of the hand to object contours. J. Neurophysiol. 1998, 79, 1307–1320. [Google Scholar] [CrossRef]
  129. Abu-Dakka, F.J.; Saveriano, M. Variable impedance control and learning—A review. Front. Robot. 2020, 7, 590681. [Google Scholar] [CrossRef]
  130. Ajoudani, A.; Tsagarakis, N.G.; Bicchi, A. Tele-impedance: Preliminary results on measuring and replicating human arm impedance in tele operated robots. In Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, Karon Beach, Thailand, 7–11 December 2011; pp. 216–222. [Google Scholar]
  131. Meattini, R.; Suárez, R.; Palli, G.; Melchiorri, C. Human to Robot Hand Motion Mapping Methods: Review and Classification. IEEE Trans. Robot. 2022, 39, 842–861. [Google Scholar] [CrossRef]
  132. Meattini, R.; Benatti, S.; Scarcia, U.; Benini, L.; Melchiorri, C. Experimental evaluation of a sEMG-based human-robot interface for human-like grasping tasks. In Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; pp. 1030–1035. [Google Scholar]
  133. Meattini, R.; Chiaravalli, D.; Palli, G.; Melchiorri, C. sEMG-based human-in-the-loop control of elbow assistive robots for physical tasks and muscle strength training. IEEE Robot. Autom. Lett. 2020, 5, 5795–5802. [Google Scholar] [CrossRef]
  134. Jafarzadeh, M.; Hussey, D.C.; Tadesse, Y. Deep learning approach to control of prosthetic hands with electromyography signals. In Proceedings of the 2019 IEEE International Symposium on Measurement and Control in Robotics (ISMCR), Houston, TX, USA, 19–21 September 2019. [Google Scholar] [CrossRef]
  135. Cognolato, M.; Atzori, M.; Gassert, R.; Müller, H. Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping. Front. Artif. Intell. 2022, 4, 744476. [Google Scholar] [CrossRef]
  136. Rasouli, M.; Ghosh, R.; Lee, W.W.; Thakor, N.V.; Kukreja, S. Stable force-myographic control of a prosthetic hand using incremental learning. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 4828–4831. [Google Scholar] [CrossRef]
  137. Kristoffersen, M.B.; Franzke, A.W.; Bongers, R.M.; Wand, M.; Murgia, A.; van der Sluis, C.K. User training for machine learning controlled upper limb prostheses: A serious game approach. J. Neuroeng. Rehabil. 2021, 18, 32. [Google Scholar] [CrossRef]
  138. Huang, Z.; Zheng, J.; Zhao, L.; Chen, H.; Jiang, X.; Zhang, X. DL-Net: Sparsity Prior Learning for Grasp Pattern Recognition. IEEE Access 2023, 11, 6444–6451. [Google Scholar] [CrossRef]
  139. Triwiyanto, T.; Maghfiroh, A.M.; Musvika, S.D.; Amrinsani, F.; Syaifudin; Mak’ruf, R.; Rachmat, N.; Caesarendra, W.; Sulowicz, M. State of the Art Methods of Machine Learning for Prosthetic Hand Development: A Review. In Proceedings of the 3rd International Conference on Electronics, Biomedical Engineering, and Health Informatics: ICEBEHI 2022, Surabaya, Indonesia, 5–6 October 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 555–574. [Google Scholar]
  140. Bicchi, A. Hands for dexterous manipulation and robust grasping: A difficult road toward simplicity. IEEE Trans. Robot. Autom. 2000, 16, 652–662. [Google Scholar] [CrossRef]
  141. Cullen, D.K.; Smith, D.H. How artificial arms could connect to the nervous system. Sci. Am. 2013, 14, 52–57. [Google Scholar]
  142. Freud, E.; Behrmann, M.; Snow, J.C. What does dorsal cortex contribute to perception? Open Mind 2020, 4, 40–56. [Google Scholar] [CrossRef]
  143. Georgopoulos, A.P.; Schwartz, A.B.; Kettner, R.E. Neuronal population coding of movement direction. Science 1986, 233, 1416–1419. [Google Scholar] [CrossRef]
  144. Righetti, L.; Ijspeert, A.J. Programmable central pattern generators: An application to biped locomotion control. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 1585–1590. [Google Scholar]
  145. Velliste, M.; Perel, S.; Spalding, M.C.; Whitford, A.S.; Schwartz, A.B. Cortical control of a prosthetic arm for self-feeding. Nature 2008, 453, 1098–1101. [Google Scholar] [CrossRef]
  146. Yadav, D.; Veer, K. Recent trends and challenges of surface electromyography in prosthetic applications. Biomed. Eng. Lett. 2023, 13, 353–373. [Google Scholar] [CrossRef]
  147. Wang, S.; Zheng, J.; Huang, Z.; Zhang, X.; Prado da Fonseca, V.; Zheng, B.; Jiang, X. Integrating computer vision to prosthetic hand control with sEMG: Preliminary results in grasp classification. Front. Robot. 2022, 9, 948238. [Google Scholar] [CrossRef]
  148. Zhang, X.; Baun, K.S.; Trent, L.; Miguelez, J.M.; Kontson, K.L. Factors influencing perceived function in the upper limb prosthesis user population. PM&R 2023, 15, 69–79. [Google Scholar]
  149. Keskinbora, K.H. Medical ethics considerations on artificial intelligence. J. Clin. Neurosci. 2019, 64, 277–282. [Google Scholar] [CrossRef]
  150. Gordon, J.S. AI and law: Ethical, legal, and socio-political implications. AI Soc. 2021, 36, 403–404. [Google Scholar] [CrossRef]
  151. Weiner, P.; Starke, J.; Rader, S.; Hundhausen, F.; Asfour, T. Designing prosthetic hands with embodied intelligence: The kit prosthetic hands. Front. Neurorobotics 2022, 16, 815716. [Google Scholar] [CrossRef]
  152. Nayak, S.; Das, R.K. Application of artificial intelligence (AI) in prosthetic and orthotic rehabilitation. In Service Robotics; IntechOpen: Rijeka, Croatia, 2020. [Google Scholar]
  153. Stahl, B.C.; Stahl, B.C. Ethical issues of AI. In Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies; Springer: Cham, Switzerland, 2021; pp. 35–53. [Google Scholar]
  154. Dignum, V. Ethics in artificial intelligence: Introduction to the special issue. Ethics Inf. Technol. 2018, 20, 1–3. [Google Scholar] [CrossRef]
  155. Li, W.; Shi, P.; Yu, H. Gesture Recognition Using Surface Electromyography and Deep Learning for Prostheses Hand: State-of-the-Art, Challenges, and Future. Front. Neurosci. 2021, 15, 621885. [Google Scholar] [CrossRef]
  156. Pfeiffer, B.E.; Foster, D.J. Hippocampal place-cell sequences depict future paths to remembered goals. Nature 2013, 497, 74–79. [Google Scholar] [CrossRef]
  157. Philpot, B.; Bear, M.; Abraham, W. Metaplasticity: The plasticity of synaptic plasticity. In Beyond Neurotransmission: Neuromodulation and Its Importance for Information Processing; Oxford University Press: Oxford, UK, 1999; pp. 160–197. [Google Scholar]
  158. Kolb, B.; Whishaw, I.Q. Brain plasticity and behavior. Annu. Rev. Psychol. 1998, 49, 43–64. [Google Scholar] [CrossRef]
  159. Walker, M.P.; Stickgold, R. Sleep-dependent learning and memory consolidation. Neuron 2004, 44, 121–133. [Google Scholar] [CrossRef]
  160. Hocaoglu, E.; Patoglu, V. sEMG-based natural control interface for a variable stiffness transradial hand prosthesis. Front. Neurorobotics 2022, 16, 789341. [Google Scholar] [CrossRef]
  161. McGaugh, J.L. Memory–a century of consolidation. Science 2000, 287, 248–251. [Google Scholar] [CrossRef]
  162. Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural Netw. 2019, 113, 54–71. [Google Scholar] [CrossRef]
  163. Kumar, D.K.; Jelfs, B.; Sui, X.; Arjunan, S.P. Prosthetic hand control: A multidisciplinary review to identify strengths, shortcomings, and the future. Biomed. Signal Process. Control. 2019, 53, 101588. [Google Scholar] [CrossRef]
  164. Cognolato, M.; Graziani, M.; Giordaniello, F.; Saetta, G.; Bassetto, F.; Brugger, P.; Caputo, B.; Müller, H.; Atzori, M. Semi-automatic training of an object recognition system in scene camera data using gaze tracking and accelerometers. In Proceedings of the Computer Vision Systems: 11th International Conference, ICVS 2017, Shenzhen, China, 10–13 July 2017; Revised Selected Papers 11; Springer: Berlin/Heidelberg, Germany, 2017; pp. 175–184. [Google Scholar]
  165. McCloskey, M.; Cohen, N.J. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation; Elsevier: Amsterdam, The Netherlands, 1989; Volume 24, pp. 109–165. [Google Scholar]
  166. Kolb, B.; Gibb, R.; Robinson, T.E. Brain plasticity and behavior. Curr. Dir. Psychol. Sci. 2003, 12, 1–5. [Google Scholar] [CrossRef]
  167. Azevedo, F.A.; Carvalho, L.R.; Grinberg, L.T.; Farfel, J.M.; Ferretti, R.E.; Leite, R.E.; Filho, W.J.; Lent, R.; Herculano-Houzel, S. Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. J. Comp. Neurol. 2009, 513, 532–541. [Google Scholar] [CrossRef]
  168. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  169. Hassabis, D.; Kumaran, D.; Summerfield, C.; Botvinick, M. Neuroscience-inspired artificial intelligence. Neuron 2017, 95, 245–258. [Google Scholar] [CrossRef]
  170. Scott, D.N.; Frank, M.J. Adaptive control of synaptic plasticity integrates micro-and macroscopic network function. Neuropsychopharmacology 2023, 48, 121–144. [Google Scholar] [CrossRef]
  171. Kong, Y.; Liu, L.; Chen, H.; Kacprzyk, J.; Tao, D. Overcoming Catastrophic Forgetting in Continual Learning by Exploring Eigenvalues of Hessian Matrix. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–15. [Google Scholar] [CrossRef]
  172. Rebuffi, S.; Kolesnikov, A.; Sperl, G.; Lampert, C. iCaRL: Incremental classifier and representation learning. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5533–5542. [Google Scholar]
  173. Loeffler, A.; Diaz-Alvarez, A.; Zhu, R.; Ganesh, N.; Shine, J.M.; Nakayama, T.; Kuncic, Z. Neuromorphic learning, working memory, and metaplasticity in nanowire networks. Sci. Adv. 2023, 9, eadg3289. [Google Scholar] [CrossRef]
  174. Cipriani, C.; Antfolk, C.; Controzzi, M.; Lundborg, G.; Rosén, B.; Carrozza, M.C.; Sebelius, F. Online myoelectric control of a dexterous hand prosthesis by transradial amputees. IEEE Trans. Neural Syst. Rehabil. Eng. 2011, 19, 260–270. [Google Scholar] [CrossRef]
  175. Farina, D.; Vujaklija, I.; Sartori, M.; Kapelner, T.; Negro, F.; Jiang, N.; Bergmeister, K.; Andalib, A.; Principe, J.; Aszmann, O.C. Man/machine interface based on the discharge timings of spinal motor neurons after targeted muscle reinnervation. Nat. Biomed. Eng. 2017, 1, 0025. [Google Scholar] [CrossRef]
Figure 1. Brain regions for hand control: (a) cortical (M1 = primary motor cortex, PMC = premotor cortex, SMA = supplementary motor area, PC = parietal cortex, TC = temporal cortex) and (b) subcortical.
Figure 2. Workflow of a hierarchical control strategy.
Figure 3. Timeline of the control strategies developed from January 2022 to September 2023.
Table 1. Advantages and disadvantages of different control strategies for prosthetic hands.

EMG-based feedforward control
Advantages: noninvasive and easy to use; compatible with commercial prostheses; can provide high accuracy and reliability.
Disadvantages: sensitive to noise and interference; affected by muscle fatigue and electrode shift; limited by the number and quality of EMG channels.

EEG-based feedforward control
Advantages: noninvasive and wireless; can access brain signals directly; can provide high flexibility and adaptability.
Disadvantages: sensitive to noise and artifacts; affected by user concentration and mental state; requires long training and calibration.

Eye tracking-based feedforward control
Advantages: noninvasive and intuitive; can exploit natural visual attention; can provide high speed and precision.
Disadvantages: sensitive to noise and occlusion; affected by user fatigue and distraction; requires accurate calibration and alignment.

Residual limb motion-based feedforward control
Advantages: noninvasive and natural; can exploit residual motor skills; can provide high dexterity and versatility.
Disadvantages: sensitive to noise and drift; affected by user comfort and stability; requires accurate mapping and scaling.

Tactile feedback control
Advantages: can enhance object manipulation skills; can improve user confidence and satisfaction; can reduce visual attention demand.
Disadvantages: can cause sensory overload or adaptation; can be affected by user perception threshold; can require invasive neural interfaces.

Proprioceptive feedback control
Advantages: can enhance hand posture awareness; can improve user embodiment and agency; can reduce cognitive load.
Disadvantages: can cause sensory mismatch or confusion; can be affected by user adaptation level; can require invasive neural interfaces.

Auditory feedback control
Advantages: can provide simple and intuitive cues; can convey complex and abstract information; can be easily integrated with other feedback modalities.
Disadvantages: can cause auditory fatigue or annoyance; can interfere with environmental sounds; can require user learning and memorization.

Visual feedback control
Advantages: can provide rich and realistic information; can facilitate user learning and training; can be easily integrated with other feedback modalities.
Disadvantages: can cause visual fatigue or distraction; can interfere with natural vision; can require additional devices or equipment.
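To ground the first row of Table 1, here is a minimal sketch of EMG-based proportional feedforward control: a raw sEMG window is rectified and low-pass filtered into a linear envelope, and the envelope amplitude above a resting dead band is mapped to a normalized grip-closing velocity. The 1 kHz sampling rate, filter order, threshold, and gain are illustrative assumptions, not values from the cited works.

```python
# Hedged sketch: EMG-based proportional feedforward control (Table 1, row 1).
# Sampling rate, filter order, dead band, and gain are assumptions.
import numpy as np
from scipy.signal import butter, lfilter

FS = 1000.0                        # assumed sEMG sampling rate (Hz)
b, a = butter(2, 5.0 / (FS / 2))   # 5 Hz low-pass for the linear envelope

def emg_to_grip_velocity(emg_window, rest_threshold=0.05, gain=1.0):
    """Map a raw sEMG window to a normalized grip-closing velocity in [0, 1]."""
    envelope = lfilter(b, a, np.abs(emg_window))             # rectify + smooth
    activation = float(np.mean(envelope[-int(0.1 * FS):]))   # last 100 ms
    if activation < rest_threshold:                          # dead band vs. noise
        return 0.0
    return float(np.clip(gain * (activation - rest_threshold), 0.0, 1.0))
```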