A review of the neurobiomechanical processes underlying secure gripping in object manipulation

O'SHEA, H. and S. J. Redmond. A review of the neurobiomechanical processes underlying secure gripping in object manipulation. NEUROSCI BIOBEHAV REV XXX-XXX, 2021. Humans display skilful control over the objects they manipulate, so much so that biomimetic systems have yet to emulate this remarkable behaviour. Two key control processes are assumed to facilitate such dexterity: predictive cognitive-motor processes that guide manipulation procedures by anticipating action outcomes, and reactive sensorimotor processes that provide important error-based information for movement adaptation. Notwithstanding increased interdisciplinary research interest in object manipulation behaviour, the complexity of the perceptual-sensorimotor-cognitive processes involved, and the theoretical divide regarding the fundamental nature of control, mean that the essential mechanisms underlying manipulative action remain undetermined. In this paper, following a detailed discussion of the theoretical and empirical bases for understanding human dexterous movement, we emphasise the role of tactile-related sensory events in secure object handling, and consider the contribution of certain biophysical and biomechanical phenomena. We aim to provide an integrated account of the current state of the art in skilled human-object interaction that bridges the literature in neuroscience, cognitive psychology, and biophysics. We also propose novel directions for future research in this area.


Introduction
A fundamental aspect of human behaviour is the ability to reach, grasp (i.e., achieve a specific hand position in gripping an object; Miall et al., 2019) and interact with objects in the environment. This behaviour requires dexterous control over the objects being manipulated, and humans are exceptionally adept at this, exhibiting proficient skill by the age of four years (Paré and Dugas, 1999). Indeed, quantitative taxonomies of human hand grasps that provide detailed descriptions of hand/finger movement highlight the complexity of the kinematics involved (e.g., Stival et al., 2019), with the fingers and thumb possessing 25 degrees of freedom (i.e., independent states of motion; Touvet et al., 2012; Pitarch et al., 2005). Over the past few decades, motor coordination and object manipulation have been increasingly studied across disciplines such as cognitive psychology, cognitive and computational neuroscience, sport psychology, neuropsychology, engineering, artificial physical intelligence, and robotics (e.g., Veiga et al., 2020; Chen et al., 2018; McDonough et al., 2020; Bach et al., 2014; Jeannerod, 1997, 2001; Wolpert and Ghahramani, 2000; Franklin and Wolpert, 2011; Hamidreza Kasaei et al., 2018; Rosenbaum et al., 2012; Koziol et al., 2012; White, 2012; Grafton, 2010; Gibson, 1977). Notwithstanding this increasingly active research area, the precise (and fundamental) mechanisms underlying controlled human action remain somewhat unclear (e.g., Valero-Cuevas and Santello, 2017).
This relates in part to the sheer complexity of the processes involved during object handling; for example, dexterous manipulation must account for information relating to an object's intrinsic properties (e.g., texture, hardness, curvature), biophysical-biomechanical characteristics (e.g., of the hand, fingertip, and skin, and of joint kinematics), and interaction dynamics (e.g., forces, pressure, torsion), to generate the appropriate afferent/efferent signals (e.g., relating to sensory and motor systems) for accurate movement control. Additionally, divergent theoretical views exist regarding the relative contribution of motor, sensory, and cognitive systems to dexterous object manipulation, which complicates the elaboration of an integrated account of skilled behaviour (Kawato, 1999; Wolpert and Ghahramani, 2000; Schöner, 2008; Foglia and O'Regan, 2016; Friston et al., 2009; Friston, 2010; Jeannerod, 2006). While these theoretical views (discussed in the next section) differ in the emphasis they place on the role of stored sensorimotor knowledge, anticipatory processes relating to sensorimotor outcomes, and sensory feedback, we adopt the view that (at least theoretically) both predictive (i.e., the internal representation of the sensory consequences of motor output based on prior knowledge and the current state of the sensorimotor system) and reactive (i.e., a rapid motor response that is reflex-mediated and/or based on error-related sensory feedback) processes contribute essentially to controlled movement (Wolpert and Ghahramani, 2000; Miall and Wolpert, 1996). Further, we consider sensory feedback to be crucial for initial sensorimotor learning during object handling, and for reactive control over objects during their manipulation. Accordingly, this review places particular emphasis on examining the role of tactile-related sensory processes in secure gripping and manipulation.
In this regard, we aim to provide an overarching account of the biophysical, biomechanical, and biological sensory-related aspects of precision gripping during manipulation. To date, research has tended to focus on a specific movement parameter or biological characteristic in grasping behaviour; for example, examining the role of skin wetness in manual function (Filingeri and Ackerley, 2017), force control (Nowak et al., 2013), the molecular basis of touch (Moehring et al., 2018), frictional properties at the finger-object interface (Derler and Gerhardt, 2012; Tomlinson et al., 2007), functional properties of cutaneous mechanoreceptors in touch (Roudaut et al., 2012), coding of sensory-tactile afferents/signals (Johansson and Flanagan, 2009), or tactile sensing technology (Chen et al., 2018; Kappassov et al., 2015; Tiwana et al., 2012). Therefore, we will integrate current research findings from across disciplines relating to general movement parameters (e.g., hand position, velocity), finger characteristics (e.g., size/orientation, fingerprint ridges), skin characteristics (e.g., elasticity, moisture/absorption, stretch), mechanosensation (e.g., receptor/afferent signals), and parameters at the finger-object interface (e.g., adjustment of force, presence of friction, tension, deformation) to comprehensively examine the biophysical, biomechanical, and sensory mechanisms underlying secure gripping during object manipulation. We first outline various theories on how sensorimotor and cognitive systems facilitate our ability to dexterously and securely manipulate objects. Subsequently, we discuss the biomechanical, biophysical, and biological sensory-related parameters that characterise human-object interactions and influence the security of grasping. We conclude with an integrated discussion of the current understanding of secure human gripping behaviour and identify potential future research directions.

Theories on the human capacity for skilful object manipulation
Humans are exceptional in their use and dexterous manipulation of objects for specific, complex purposes, where it is necessary to reason mechanically and understand physical causal relationships to achieve functional goals (Johnson-Frey, 2003). Over time, humans acquire sensorimotor and object knowledge through interactions with the environment. By acting on the environment we learn relationships between sensory inputs and motor outputs (i.e., action and its effect), including the laws that characterise and control our movements, which can be used implicitly during action programming (e.g., Fitts' speed-accuracy trade-off, Hick's action choice-reaction time correspondence, etc.; Jeannerod, 2006). For example, although the reaching movements required to either grasp and place an object (in an opening) or grasp and throw an object are similar, the reach is typically slower during the placing action than during the throwing action, thus demonstrating a sacrifice of speed for accuracy (Marteniuk et al., 1987). Different theories of skilful movement exist and place different degrees of emphasis on the role of stored knowledge, predictive processing, and sensory feedback. In this regard, some researchers and theorists minimise ideas of distinct functional subsystems, and propose that skilful object manipulation is closely bound to perceptual-cognitive processes (i.e., ecological and enactive theories; for a summary of their theoretical postulates, see Table 1).
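The speed-accuracy trade-off mentioned above can be made concrete with Fitts' law, which predicts movement time from target distance and width. The sketch below is illustrative only: the function name and the coefficient values `a` and `b` are arbitrary choices for demonstration, not values from the literature.

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts' law: MT = a + b * log2(2D / W), where D is the movement
    distance, W the target width, and log2(2D/W) the index of difficulty."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Placing an object into a narrow opening (small W) yields a longer predicted
# movement time than throwing it toward a large target region at equal distance.
mt_place = fitts_movement_time(a=0.1, b=0.15, distance=0.30, width=0.01)
mt_throw = fitts_movement_time(a=0.1, b=0.15, distance=0.30, width=0.20)
```

This reproduces, in toy form, the placing-versus-throwing asymmetry reported by Marteniuk et al. (1987): shrinking the target width raises the index of difficulty and hence the predicted movement time.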
For instance, there may be: a 'strict bond' between perception and action with little, if any, functional separation between the two processes (i.e., embodied, enactivist approaches; Thompson and Varela, 2001; Lucaites et al., 2020; Lobo et al., 2018; Borghi and Cimatti, 2010; Lepora and Pezzulo, 2015); 'affordance-matching', where perception of object properties is informed by object knowledge, and these perceptual-cognitive processes are interconnected with action processes (McDonough et al., 2020; Bach et al., 2014); or knowledge of 'sensorimotor laws', which results from changes in environmental sensory inputs with movement and influences perception (Foglia and O'Regan, 2016; O'Regan and Noë, 2001).
The primary dividing principle between theories attempting to explain successful object gripping and manipulation is whether the motor-cognitive-perceptual processing occurring during movement relies on, and is causally related to, internal representations (i.e., mental knowledge structures of external events) or not (for a comparison between representational and non-representational theories, see Table 1). A detailed theoretical discussion of each of these is beyond the remit of the present review, and so we will focus our discussion around one of the most well-developed and empirically supported theoretical frameworks: predictive motor control (Wolpert and Ghahramani, 2000; Wolpert and Flanagan, 2001; Kawato, 1999). Accordingly, for the purposes of our review, we will further expand on and apply the theoretical assumptions of this perspective to explore the relative contribution of stored knowledge, predictive processing, and sensory feedback to secure gripping during dexterous object manipulation.
The large body of motor control work provides a strong theoretical basis for how humans learn to appropriately control their grip during dexterous object manipulation (e.g., Wolpert and Ghahramani, 2000; Wolpert and Flanagan, 2001; Wolpert et al., 1995). Specifically, two key processes are postulated: one predictive process that transforms motor commands/output into anticipated sensory consequences (efferent to anticipated afferent signals), and another control-related process that transforms the (desired or actual) sensory consequences into motor commands (afferent to efferent signals; e.g., see Franklin and Wolpert, 2011). With experience, internal representations of both object properties (physical- and function-related properties) and sensory-motor coupling are formed. For instance, knowledge of the physical properties of objects (e.g., size, weight and weight distribution, material, etc.) and of object function is acquired through experience, and we typically use information relating to object affordances (i.e., action possibilities in relation to the object and the actor; Lucaites et al., 2020) to inform and constrain our action planning for the upcoming movement (Bach et al., 2014; Osiurak, 2013). Skilled object manipulation exploits this knowledge of objects and their dynamics (i.e., the forces required for and involved in their movement) and the associated contact mechanics to predict the course of an action in relation to an object. This purported close connection between action and object knowledge facilitates movement control (Heald et al., 2018; Bach et al., 2014; McDonough et al., 2020). Further, knowledge of sensory-motor coupling is assumed to permit secure and precise gripping during manipulation, as grip force can be scaled according to the predicted load (Johansson and Cole, 1992; Wolpert and Flanagan, 2001; Leib et al., 2015).
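The two postulated transformations can be caricatured as a paired forward and inverse model. In this toy sketch (the function names and the drastic physics simplification are ours, not drawn from the motor-control literature), the "sensory consequence" of a lift command is the load force it produces, and the inverse model recovers the command that would yield a desired load:

```python
G = 9.81  # gravitational acceleration, m/s^2

def forward_model(object_mass, commanded_acceleration):
    """Predict the sensory consequence (load force, N) of a lift command:
    the object's weight plus the inertial force of the commanded acceleration."""
    return object_mass * (G + commanded_acceleration)

def inverse_model(object_mass, desired_load):
    """Recover the motor command (vertical acceleration, m/s^2) that the
    forward model predicts will produce the desired load force."""
    return desired_load / object_mass - G

# For a given internal mass estimate, the two transformations are mutual inverses.
command = inverse_model(object_mass=0.5, desired_load=6.0)
predicted = forward_model(object_mass=0.5, commanded_acceleration=command)
```

The point of the pairing is that prediction (efferent to anticipated afferent) and control (desired afferent to efferent) are two directions through the same internal knowledge of object dynamics.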
According to predictive motor control frameworks, combining prior knowledge with proprioceptive (i.e., a sense of finger, hand, and arm position in space), tactile, and visual sensory information provides more certainty (in terms of secure grip) than sensory feedback alone (for discussion of Bayesian estimation, see Franklin and Wolpert, 2011). For example, when individuals were trained to match a wide range of grip forces, they displayed fewer performance errors than those who experienced a narrower range of grip forces when later required to respond to uncertain force levels (Körding et al., 2004). Experience-based and error-based learning (i.e., in relation to movement errors) seem to allow us to generate multiple context-specific internal representations relating to object dynamics and their corresponding movements, and this facilitates early adaptation of grip force during their manipulation (Heald et al., 2018; Fu and Santello, 2015; Ingram et al., 2011). Such representations can be rapidly assembled (especially for familiar objects) to respond to the object using visual input that provides contextual information (e.g., object orientation; Ingram et al., 2011).
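The Bayesian combination referred to above has a simple closed form when the prior and the sensory noise are modelled as Gaussians: the estimate is a precision-weighted average, so the noisier the sensory channel, the more the estimate is drawn toward the prior. A minimal sketch (the function and variable names are ours, for illustration):

```python
def bayes_estimate(prior_mean, prior_var, sensed_value, sensory_var):
    """Posterior mean for a Gaussian prior combined with one noisy Gaussian
    observation: each source is weighted by the other source's variance."""
    w_prior = sensory_var / (prior_var + sensory_var)
    return w_prior * prior_mean + (1 - w_prior) * sensed_value

# A noisy load-force sample (9 N) is pulled toward the prior expectation (5 N);
# with equal variances the estimate lands at the midpoint, 7 N.
estimate = bayes_estimate(prior_mean=5.0, prior_var=1.0,
                          sensed_value=9.0, sensory_var=1.0)
```

Setting `sensory_var` very large makes the estimate collapse onto the prior, mirroring how degraded feedback pushes behaviour toward prior expectations.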
Predictive processes permit anticipation of the sensory consequences of a planned action. Even so, the tuning of ongoing movement during manipulation relies on error-based feedback to update internal representations, facilitate movement adaptation, and generate motor output with minimal error (Ingram et al., 2017). This updating process is thought to rely on a comparator mechanism, where predicted and actual sensory outcomes are weighted in terms of the action goal, with motor output adjusted accordingly if required (Heald et al., 2018). An excellent illustration of the interaction between stored knowledge, predictive processing, and sensory feedback is the size/material-weight illusion (e.g., see Buckingham, 2014; Flanagan and Beltzner, 2000; Wolf et al., 2018; Hayward, 2008; Murray et al., 1999). Specifically, in the size-weight illusion individuals tend to judge the smaller of two equally-weighted objects to be heavier than the larger one. This illusion is thought to stem from a prior expectation that leads to an erroneous cognitive-perceptual experience: because individuals believe that the smaller object will be lighter, when it is lifted it feels relatively heavy (Buckingham, 2014; but see Freeman et al., 2019). However, research demonstrates that although the illusion is maintained at a cognitive representational level, whereby individuals continue to report the smaller object to be the heavier item, they typically display correct adjustment of grip force over repeated lifts (to note, humans typically avoid dropping an object by applying a grip force that is scaled according to the load force exerted by the object, and this grip-to-load force ratio is characteristically the minimum required for successful object manipulation; for further discussion of the scaling of grip-to-load force, see the later section "biomechanics and secure gripping"; Flanagan and Beltzner, 2000).
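The comparator mechanism described above can be sketched as a single error-driven update of an internal parameter, here an object-mass estimate (the formulation, names, and learning rate are ours, for illustration only): the predicted load is compared with the sensed load, and the estimate is nudged in proportion to the prediction error.

```python
G = 9.81  # gravitational acceleration, m/s^2

def comparator_update(estimated_mass, actual_load, commanded_acceleration,
                      learning_rate=0.5):
    """Compare the forward-model load prediction with the sensed load force,
    and correct the internal mass estimate by a fraction of the error."""
    predicted_load = estimated_mass * (G + commanded_acceleration)
    error = actual_load - predicted_load  # sensory prediction error
    return estimated_mass + learning_rate * error / (G + commanded_acceleration)

# A static hold (zero commanded acceleration) of a 0.8 kg object that was
# initially believed to weigh 0.5 kg: the estimate moves halfway to the truth.
updated = comparator_update(estimated_mass=0.5, actual_load=0.8 * G,
                            commanded_acceleration=0.0)
```

Repeated over lifts, such updates converge on the true object dynamics, consistent with the observation that grip force is corrected across repeated lifts even while the perceptual illusion persists.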
Incongruence between the appropriate scaling of grip-to-load force and incorrect verbal report during illusion tasks suggests that sensorimotor internal representations are adapted (in a reactive manner) according to sensory information (signalling 'true' information), but that this information remains (at least initially) outside of conscious awareness (i.e., limiting individuals' ability to consciously report it). To note, conscious awareness can be graded in nature, in that behaviour can occur and be successful without individuals being able to subjectively report on their experience, perhaps due to a response bias stemming from prior knowledge and/or visual input (Persuh, 2017; Seth et al., 2008). It is possible that updated information (i.e., grip adjustment) might require a greater amount of time to enter consciousness and modify prior expectations. In support of the latter proposal, research demonstrates that under induced time delays in force feedback, individuals underestimated the stiffness of an elastic force field (during verbal report) even though such time delays did not affect their ability to appropriately scale the grip force they applied to a tool when probing the force field (Leib et al., 2015). The influence of sensory feedback processes is also evident at the neural level, with research demonstrating that initial learning of force control is associated with changes in the primary somatosensory cortex that are observed before changes in the primary motor cortex (which possibly depend on increases in excitability in the somatosensory cortex; Ohashi et al., 2019).
Similarly, in contexts where finger positioning during object grasping is unconstrained and variable in nature (e.g., there are few visual cues as to where the fingers should ideally be placed to minimise torque), activity in the somatosensory cortex (in conjunction with activity in the primary motor cortex) is observed, and associated with rapid modulation of grip force to finger position (Parikh et al., 2020).
Overall, these interesting observations indicate that, regardless of whether grip-to-load force adjustments involve conscious processing or not (an issue that would benefit from future behavioural-level research), gripping during object manipulation requires and uses sensory control-related processes that contribute to grip regulation; for example, during writing, holding/moving objects, and when grasping objects at unfamiliar locations (Ebied et al., 2004; Nowak et al., 2004; Davare et al., 2019).

Table 1
Summary of the main theoretical perspectives on skilful object manipulation and their key postulates.

Predictive motor control (Wolpert and Ghahramani, 2000; Wolpert and Flanagan, 2001; Kawato, 1999; Yang et al., 2016): Prior knowledge facilitates the assembly of internal representations (which comprise action goals, motor programs, and predicted motor and sensory consequences). In conjunction with proprioceptive, tactile, and visual sensory information, representations facilitate and direct dexterous movement (by issuing appropriate motor commands) during object manipulation. Sensory feedback is important for movement control: it signals discrepancies between desired/predicted and actual movement outcomes, updates internal representations, and corrects errors in ongoing movement.

Sensorimotor contingency theory (O'Regan and Noë, 2001; Foglia and O'Regan, 2016): Object manipulation does not involve the assembly of internal representations or predictive processing; instead, sensorimotor knowledge (i.e., practical knowledge of action-effect contingencies) facilitates object manipulation through knowledge of anticipated/possible outcomes. Sensory information is important because it facilitates acquisition of knowledge of the interdependencies between sensory input and motor output (i.e., the laws that operate in relation to the sensory changes produced by motor action).

Theory of event coding (Hommel, 2019; Hommel et al., 2001; Hommel, 2015): Object manipulation involves object perception and action planning, and the two processes share internal representations relevant to the present goal. Sensory information is assumed to support the common perceptual and motor coding of external features because it contributes to the sensorimotor architecture on which perception and action rely for encoding features of the external environment. Previously experienced sensory (reafferent) patterns following action (motor output) generate action-effect codes that can be reactivated by either action or effect/perception.

Ecological / embodied / enactive approaches (Thompson and Varela, 2001; Lucaites et al., 2020; Lobo et al., 2018; Borghi and Cimatti, 2010; Lepora and Pezzulo, 2015; Wilson and Golonka, 2013; Wilson, 2002): Cognition/perception and action do not rely on internal representations, but operate in a continuously interconnected perceptual-motor manner during object manipulation. Perceptual information relating to objects can elicit a response without the need for intermediary cognitive representations or processes. The body and motor system affect how we interact with and interpret the environment, in that perception of the environment is influenced by (or even depends on) action. In strict embodiment views, no prior knowledge is required, just engagement with the environment (Hutto and Myin, 2013).

Active inference (Friston, 2010, 2011; Friston et al., 2017; Adams et al., 2013a,b): The process of internal representation (i.e., learned or innate/reflexive mental modelling of action and its consequences in the world) supporting object manipulation involves the formation of sensory expectations based on prior movement-outcome experience. These expectations or predictions facilitate action performance with minimal error. Motor control emerges from active sensory inference (i.e., the ability to infer tactile-proprioceptive input from optimal prediction) and the use of sensory error (e.g., stemming from cutaneous receptor activity) to drive appropriate action performance.
Accordingly, it seems that both (predictive) feedforward and (sensory) feedback mechanisms are important for the ongoing updating of internal representations relating to, for example, force-load relationships, and also for adjusting representations within current temporal parameters (Leib et al., 2015; Grover et al., 2019). Given the importance of sensory information in shaping internal representations and in providing feedback during object handling, the following section will explore hand-object interaction, with a specific focus on the mechanical events occurring at the finger-object interface. In particular, we examine the role of biological sensory-related aspects of grasping behaviour, initially exploring the contribution of vision and then examining how mechanical events at the finger-object interface are transduced via specialised skin receptors. Following this, we discuss biophysical and biomechanical parameters that aid our ability to sense information essential for the measurement and control of grasp security during object manipulation.

Biomechanical, biophysical, and biosensory-related aspects of grasping behaviour
Control of movement during object manipulation involves two key elements: (i) controlling the behaviour of the object, which involves, for example, understanding its weight and weight distribution, the forces it exerts on the human hand, and the appropriate grip force necessary to securely manage the load; and (ii) controlling the movement trajectory (of arm/hand/digits and object) so that the object can be manipulated in line with the current goal (Flanagan et al., 2003). Mastery of these two components is required for successful object handling, a complex endeavour given that the mechanics (i.e., the forces, stiffness, and frictions) associated with the manipulation task may vary substantially with time, and so require continuous updating of the corresponding grip force (Flanagan et al., 2003). This section of the review examines how biological and biophysical phenomena occurring during object handling facilitate secure gripping during manipulation. We first explore how vision and tactile sensation support accurate gripping of an object.
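The "appropriate grip force" in (i) can be bounded from below by contact mechanics: for a two-digit precision grip, the friction at the two digit-object contacts must jointly support the load, giving a slip threshold of G_min = L / (2μ), above which some safety margin is typically applied. The sketch below is illustrative; the function names and the 20% margin are our own choices, not values from the review.

```python
def minimum_grip_force(load_force, friction_coefficient):
    """Slip threshold for a two-digit precision grip: friction at both
    digit-object contacts must support the load, so G_min = L / (2 * mu)."""
    return load_force / (2.0 * friction_coefficient)

def applied_grip_force(load_force, friction_coefficient, safety_margin=0.2):
    """Grip force with a proportional safety margin above the slip threshold."""
    return (1.0 + safety_margin) * minimum_grip_force(load_force, friction_coefficient)

# A more slippery surface (lower mu) demands a proportionally larger grip
# force for the same 2 N load.
grip_dry = applied_grip_force(load_force=2.0, friction_coefficient=1.0)
grip_slippery = applied_grip_force(load_force=2.0, friction_coefficient=0.4)
```

This captures why time-varying friction at the finger-object interface requires continuous updating of grip force: as μ drops, the slip threshold, and with it the required grip, rises.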

Sensation -the role of vision in object handling
During voluntary gripping, we typically approach an object with a pre-shaped anticipatory grasp that is informed by previous experience and visual input (Jeannerod, 1984, 1997; Cesanek and Domini, 2018). This anticipated grip type depends on how we intend to use the object we are reaching for. For example, a cylindrical object can be gripped using a power grip (i.e., clamping the fingers and thumb around an object and directing force towards the palm of the hand) or a precision grip (i.e., using the pads of the distal phalanges of the thumb and index finger; Rosenbaum et al., 2012). Accordingly, initial hand-object interaction can be defined by a characteristic two-component process, that is, a reach phase and a grip formation phase that is initiated during the reach (Hoffmann et al., 2019; Jeannerod, 1984). The formation of grip type during the reach corresponds to the object's characteristics (e.g., size) and orientation (Jeannerod, 1997; Rosenbaum et al., 2012). Research findings support the relative independence of these reach and grasp components; for example, by demonstrating distinct movement times (Hoffmann et al., 2019) and distinguishable neural substrates and neuro-electrophysiological frequency bands (Sburlea and Müller-Putz, 2018) for these phases. The grasping component of object manipulation is not only informed by predictive feedforward signals based on internal models of previously experienced grasps (used to issue appropriate motor commands; Jeannerod, 1994), but importantly relies on visual input that provides essential information about the physical properties of the object to be grasped; for example, its shape, size, and orientation (Marneweck et al., 2018; Cesanek and Domini, 2018; Johansson and Flanagan, 2008).
Studies demonstrate that skilful grasping and manipulation rely heavily on vision to ensure an accurate and well-configured grip (see, e.g., Stone and Gonzalez, 2015; Rand et al., 2007; Mountcastle, 2005). For example, without vision, the grip aperture formed during the movement towards an object is typically larger than necessary (Bozzacchi et al., 2018). Visual attention directed towards an object seems to facilitate synchronous grip-to-load force coupling (Takamuku and Gomi, 2019); indeed, when visual feedback (e.g., seeing an object moving downwards) is incongruent with somatosensory feedback (e.g., from moving an object upwards), grip force lags behind load force (Toma et al., 2020). In this instance, there seems to be a predominant reliance on visual estimates of peak load force (based on prior gravitational knowledge) rather than on the actual experienced load force (Toma et al., 2020). Interestingly, mere observation of others grasping objects of differing weights facilitates accurate grip-to-load force application during subsequent object lifts, based on this prior visual perceptual experience (Reichelt et al., 2013).
So, vision often has a predominant role in movement over and above the other senses, which can lead to phenomena such as feeling our hand where we see it (Botvinick and Cohen, 1998; Fourneret and Jeannerod, 1998; Jeannerod, 2006). For example, research manipulating visual feedback during a line-drawing task demonstrates that individuals correct their movements (i.e., deviate as necessary when drawing a line) in order to fulfil their intentions (e.g., draw a straight line), and that movement deviations can be based on non-veridical visual information (i.e., false visual feedback), which appears to override the more veridical feedback signals from actual movement (Fourneret and Jeannerod, 1998). Furthermore, visual cues facilitate an initial assessment of an object's dynamic properties before its manipulation (Toma et al., 2020). Similarly, visual information relating to object movement shapes the internal representation process so that predictions of object dynamics (e.g., expected peak load force) can be formed and guide the accurate implementation of grip force control (Toma et al., 2020).
Notwithstanding the apparent primacy of visual information for grip formation and during movement (Bozzacchi et al., 2018), congenitally blind individuals are able to use tactile and kinaesthetic information to appropriately scale grip according to the object being grasped; this capability is similar to that displayed by their sighted counterparts (e.g., Karl et al., 2012; Castiello et al., 1993). Furthermore, tactile information is especially important when dealing with ambiguous stimuli or contexts. Specifically, research demonstrates that when participants are required to report whether a vertical bar is longer or shorter than a horizontal bar when both appear to be the same size (but are not; i.e., the vertical-horizontal illusion), the confidence they place in their size estimation is greater with touch alone than with vision alone (Fairhurst et al., 2018). So, vision is only an indirect measure of external events because it is limited in its capacity to inform us about non-spatial dimensions (e.g., object weight or weight distribution). Alternatively, kinematic and tactile information can provide more direct measures of kinematics (e.g., motor programs adhering to kinematic laws) and dynamics/mechanics (e.g., forces, interfacial contact properties, environmental constraints) that are relevant to hand-object interaction (Pozzo et al., 2006). The next section will review the tactile sensory mechanisms that support secure object manipulation, and in particular, how specialised mechanoreceptors in the skin transduce surface mechanical events and aid tactile sensation.

Sensation -the role of tactile input in object handling
Over time, humans experience copious interactions with objects in the environment, and this experience encompasses not only functional knowledge in relation to object use and environmental constraints, but also information relating to mechanical forces: those that we exert on objects and those that are exerted on us (White, 2012; Johansson and Flanagan, 2009). These forces are sensed and mediated by tactile inputs from low-threshold (i.e., responsive to weak, non-damaging stimuli) cutaneous mechanoreceptors (activated in response to contact mechanics at the skin-object interface) and proprioceptive inputs (discussed in the next section) relating to receptors in joints and muscle-tendon complexes (that provide information about body position and self-motion; Toma et al., 2019; Olson et al., 2016; Macefield, 2005). Tactile sensations (arising in the peripheral nervous system) resulting from, for example, skin deformation and/or vibration offer a direct sensory input (Barrea et al., 2018; Bensmaia and Hollins, 2003; Delhaye et al., 2012; Dzidek et al., 2017a). The cutaneous mechanoreceptor system is assumed to be the primary system for the initial (and ongoing) encoding of mechanical (force-related) events that provide human perceptual-cognitive-motor systems with fundamental information for the formation of appropriate internal representations during object gripping and manipulation (White, 2012). This section of the review outlines how human tactile sensation facilitates reaction to and perception of force-related information and contributes to controlled gripping behaviour during object manipulation. Mechanical-related tactile sensation is neurally encoded by three (or possibly four) types of cutaneous mechanoreceptor (located at the finger pads), which generate responses in associated afferent channels, transforming mechanical events into electrochemical impulses that are transmitted to the central nervous system (see Table 2 and Fig. 1; e.g., Li et al., 2011; Abraira and Ginty, 2013; Fleming and Luo, 2013; Kuroki et al., 2017; García-Mesa et al., 2017; Zimmerman et al., 2014; Paré et al., 2003; Macefield and Birznieks, 2008; Macefield, 2005; Birznieks et al., 2001; Vallbo and Johansson, 1978). These cutaneous mechanoreceptors are assumed to facilitate secure gripping during object manipulation by encoding salient force-related sensory events (e.g., indentation, curvature/edge information, skin movements, object movement/slip, etc.) and providing feedback for updating internal representations and/or for reactive (i.e., reflex-mediated) sensory control during ongoing object handling (Strzalkowski et al., 2016; Johansson and Flanagan, 2009; Westling and Johansson, 1987).
The afferent fibres innervating the mechanoreceptor end organs have relatively fast conduction velocities, which allows them to quickly convey mechanically related information to higher processing centres (mean ≈ 60 m s⁻¹; Abraira and Ginty, 2013; McGlone et al., 2014). These afferents form two broad (functional) categories, with classification determined by the pattern of neuronal firing to either transient or sustained skin deformation (i.e., their adaptation properties): (i) fast-adapting units (FA), which react with bursts of action potentials to transient (or vibratory) skin deformation events at the start and end of stimulus contact but quickly decrease their responsiveness thereafter (thus, they are most sensitive to the rate of change of mechanical events); and (ii) slowly adapting units (SA), which persistently produce action potentials throughout sustained skin deformation (thus detecting indentation and/or stretch; Zimmerman et al., 2014; García-Piqueras et al., 2017; Abraira and Ginty, 2013; Johansson and Vallbo, 1979). In each category, the afferents and their associated end receptors are further classified according to the extent of their spatial receptive fields (i.e., the area around the receptor where mechanical stimulation can elicit a neuronal response), which is in turn determined by the depth at which the mechanoreceptor is located in the skin (see Table 2; Abraira and Ginty, 2013; Johansson and Vallbo, 1979). For instance, the mechanoreceptors innervated by Type-I afferents are located at a shallow depth near the boundary between the dermis and epidermis, hence having smaller receptive fields, whereas the mechanoreceptors innervated by Type-II afferents are found deeper in the dermis and have larger receptive fields (Table 2).
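The quoted mean conduction velocity implies short but non-negligible transmission delays. A back-of-the-envelope calculation makes this concrete (the ~1 m fingertip-to-CNS path length below is an illustrative round figure, not a measured value):

```python
def conduction_latency_ms(path_length_m, velocity_m_per_s=60.0):
    """Travel time, in milliseconds, of an afferent volley over a given
    path length at a given conduction velocity."""
    return 1000.0 * path_length_m / velocity_m_per_s

# At ~60 m/s, a ~1 m afferent path contributes roughly 17 ms of delay
# before central processing of a fingertip contact event can even begin.
latency = conduction_latency_ms(path_length_m=1.0)
```

Delays of this order are one reason why purely feedback-driven control is too slow for rapid manipulation, and why the predictive processes discussed earlier are thought to be necessary.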
The four main types of cutaneous mechanoreceptor purportedly involved in the encoding and transduction of mechanical forces at the finger pad and hand are the fast-adapting Meissner and Pacinian corpuscles and the slow-adapting Merkel complexes and Ruffini endings (note, however, that the precise association between the Ruffini endings and the SA-II afferents is debated, particularly regarding whether all SA-II responses are triggered by these endings; Corniani and Saal, 2020; for individual details, see Table 2 and Fig. 1).
In general, the FA-I and SA-I end receptors are more numerous at the fingertip than finger pad (Johansson and Vallbo, 1979;Johnson, 2001), while the FA-II and SA-II end receptors are evenly distributed over the finger pad and fingertip (Corniani and Saal, 2020;Johansson and Vallbo, 1979;McGlone and Reilly, 2010;Roudaut et al., 2012;Macefield, 2005). Further, each cutaneous mechanoreceptor appears to be associated with a characteristic percept(s) (see Table 2; Johansson and Westling, 1987;Macefield, 2005).
Regarding the force-related mechanical events at the finger pad skin during object manipulation, the distinct activation rates, intensities, and patterns of individual mechanoreceptors may offer unique information (i.e., with each tuned to a specific tactile event) that is used in isolation from other mechanoreceptor signals to guide dexterous object manipulation. Conversely, the mechanoreceptors may act as an integrated ensemble at the population level, with each type contributing specific tactile information to generate a unified and comprehensive tactile percept (Kuroki et al., 2017; Saal and Bensmaia, 2014; Abraira and Ginty, 2013). Given the distinct nature of each mechanoreceptor's response to stimulation, it is likely that individual signals are insufficient by themselves to encode the complexities of events at the finger-object interface. For example, FA-I end receptors are located within the dermal papillae (which manifest as fingerprint ridges) and are attached to the epidermis by collagen fibres, making them extremely sensitive to the frictional forces and vibration generated when objects move across the skin (Zimmerman et al., 2014; Roudaut et al., 2012). Further, even though these receptors have small receptive fields and respond in a largely uniform manner to stimuli within their fields (resulting in poor spatial acuity), they respond especially vigorously to slip (Johnson, 2001). However, because the FA-I mechanoreceptors are putatively associated with a slip response rather than with sustained stimulation, they can only partially signal incipient slip (i.e., slip evolution). Specifically, they will directly respond to transients at the propagating boundary between stuck and slipping skin but not to the stuck or contact area (for skin strain patterning and receptor activity, see Delhaye et al., 2016).
Accordingly, because it is only the absence of a response in the contact area that indirectly informs the tactile sensory system, little detail is transduced about this region. The SA-I receptors may therefore complement the activity of the FA-I receptors by responding linearly to the depth of sustained indentation over a range of indentations, allowing them to signal the precise extent of the central contact area (Johnson, 2001; Roudaut et al., 2012). This information, combined with the transient information provided by FA-I receptors, may facilitate responding, transduction, and eventual perception of slip evolution. A combined response from most of the mechanoreceptors would confer greater advantage for secure object manipulation (Abraira and Ginty, 2013).
In support of an integrated response, all mechanoreceptors have been found to respond to most mechanical events at the finger-object interface (Chalfie, 2009;Saal and Bensmaia, 2014), and although each mechanoreceptor has a vibration activation threshold and range (see Table 2), at suprathreshold levels all four mechanoreceptors respond to vibratory/dynamic events (McGlone et al., 2014). This suggests that mechanosensation is a synergistic process. Therefore, secure gripping may be facilitated through the integrated responses (i.e., neuronal firing rates) of individual mechanoreceptors; for example, via their overlapping spatial fields, which possibly heightens the intensity of the overall signal they create (Bensmaia, 2008;Graczyk et al., 2016). Comprehensive and salient signalling may facilitate deeper processing of tactile sensory inputs for both reactive (or error-based) control during object manipulation and for influencing higher-level processes (White, 2012).
Object manipulation is a dynamic process, during which the interaction forces between the object and the hand are typically in flux. As outlined in this section, we have developed a sophisticated mechanically sensitive tactile system for receiving, processing, and acting on information relating to physical hand-object interactions (Nowak et al., 2003). The next section will focus on detailing force-related events at the finger-object interface and how these influence skin deformation and strain patterning. The objective is to explore the key biophysical properties of the human fingers that influence grip adjustment and/or grip security.

Biophysical properties of the human fingers/hand that influence secure gripping
This section of the review discusses hand-object interaction and the important biophysical properties and behaviours of the finger pad skin which aid our ability to sense information essential for the measurement and control of grasp security during object manipulation. In particular, we discuss mechanical events and endogenous mechanisms that affect skin stiffness and friction between the skin and the object being manipulated. Humans use friction to securely grip an object, and so, this section will also explore how we feel friction, by considering events such as the genesis of slip.
An important biophysical property influencing our ability to grip securely is skin hydration, which influences skin stiffness, and thus, friction between the skin and the object being manipulated. With increasing skin stiffness (e.g., with increasing age), there is a corresponding decrease in the area of real contact (i.e., as opposed to the apparent contact area, the real contact area is the integral of all microscopic regions of contact between two surfaces and directly affects the friction strength of the interface; Weber et al., 2019; Dzidek et al., 2017a), which reduces the friction at the finger-object interface, and so, affects the capacity for secure gripping (Adams et al., 2013a,b; Leyva-Mendivil et al., 2017; Ben-David et al., 2010; Sahli et al., 2018; Delhaye et al., 2014; Arvidsson et al., 2017). To reduce skin stiffness, the finger pad skin relies on structural and biophysical properties, such as the functioning of the outermost layer of the skin (the stratum corneum) and the endogenous regulation of skin hydration (Filingeri and Ackerley, 2017; André et al., 2011; Adams et al., 2007; Pasumarty et al., 2011; Joodaki and Panzer, 2018). To explain, the stratum corneum of the finger pad is largely comprised of dead cells containing keratin filaments that give the skin a strong structure and act as a protective barrier (Mojumdar et al., 2017; Alonso et al., 1996). When the skin is dry, the keratin remains strong with little plasticity, resulting in a small real contact area (and hence low friction) because the skin's stiffness hinders its ability to deform and form an intimate contact with the object. However, when the skin is wet it becomes more pliable and can deform to increase the real contact between skin and object, hence increasing the friction available for secure object manipulation (Mojumdar et al., 2017; Adams et al., 2007; Dzidek et al., 2017a).
Excessive hydration, however, seems to undermine the structural integrity of the keratin composition, affecting object-induced skin deformation and sensing (Kandaswamy et al., 2019; Evans and Hyde, 2011; André et al., 2009; Tomlinson et al., 2011).
Experimental research provides evidence for the influence of keratin plasticisation on grip security, in that dry skin contributes to grip force instability during object manipulation while very wet skin can trigger the use of excessive grip force (Kandaswamy et al., 2019; Filingeri and Ackerley, 2017; André et al., 2011; Alonso et al., 1996). Both dry skin and excessively wet skin increase the potential for an object to slip/drop (due to low friction at the skin-object interface), and so, the level of grip force must be increased (Tomlinson et al., 2011; André et al., 2010; Derler et al., 2009). Accordingly, endogenously regulated hydration is an important biophysical characteristic that moderates the effects of skin stiffness, which may arise, for example, with greater applied forces (Smith et al., 2002; Pawluk and Howe, 1999). Human studies demonstrate that occluded skin on the finger pad (i.e., skin fully in contact with a surface) naturally adjusts moisture levels over time through modulation of sweat (released from sweat glands on fingerprint ridges; Adams et al., 2007; Pasumarty et al., 2011; Dzidek et al., 2017b). So, humans have necessarily adapted to achieve optimal levels of moisture in the skin of the finger pads, and the fingerprint ridges appear to be especially important in this regard, due to their capacity to secrete moisture from the underlying sweat glands and possibly also by facilitating 'run-off' of excess moisture through the fingerprint valleys towards the edge of the contact area, much like the tread of a tyre (Warman and Ennos, 2009). In this regard, a recent study demonstrates that fingerprints specifically contribute to moisture regulation by maintaining an optimal level of moisture in the valleys of the ridges, achieved through a balance of capillary evaporation and the blocking of sweat pores (i.e., enhancing evaporation of excess moisture from the valley sides while retaining moisture pools in the base of the valley; Yum et al., 2020).
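The stiffness-friction relationship outlined above can be sketched with a toy model, assuming (i) adhesive friction proportional to the real contact area and (ii) a Hertz-like contact in which the real contact area scales as (load/modulus)^(2/3). Hydration enters only through its softening effect on the effective elastic modulus, and all constants are arbitrary illustrative values.

```python
def real_contact_area(normal_force, elastic_modulus, k=1.0):
    # Hertz-like scaling: softer skin (lower modulus) conforms more closely
    # to the counter-surface, raising the real contact area.
    return k * (normal_force / elastic_modulus) ** (2.0 / 3.0)

def friction_force(normal_force, elastic_modulus, shear_strength=0.5):
    # Adhesive friction model: friction = interfacial shear strength * A_real.
    return shear_strength * real_contact_area(normal_force, elastic_modulus)

# Dry, stiff stratum corneum (high modulus) vs plasticised, hydrated skin (low modulus):
dry = friction_force(normal_force=2.0, elastic_modulus=100.0)
moist = friction_force(normal_force=2.0, elastic_modulus=10.0)
print(f"dry: {dry:.4f} (a.u.), moist: {moist:.4f} (a.u.)")
```

The sketch simply reproduces the qualitative trend discussed above: softer (moderately hydrated) skin conforms more, increasing the real contact area and hence the friction available for secure gripping.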
Even though humans actively modulate the hydration of the finger pad skin to maximise the available friction, manipulation forces may still occur that threaten the security of the grasp. The question therefore arises as to which crucial mechanical indicators inform humans of the need for grip adjustment. In this regard, evidence suggests that force (frictional/tangential, inertial) and/or slip (partial/incipient) are essential cues (e.g., Damian et al., 2018; Adams et al., 2013a,b; Johansson and Westling, 1987; André et al., 2011; Khamis et al., 2018). For example, research demonstrates that friction at the finger-object interface guides the scaling of grip-to-load force, allowing humans to adapt their grip response within ~100 ms of object contact (Cadoret and Smith, 1996; Johansson and Westling, 1984). The friction at the interface will determine the forces at which slip will occur. Accordingly, understanding the relationship between interfacial parameters and frictional states is important.
Characteristic patterns of slip deformation are typically observed at the finger-object interface as the slip evolves from partial slip to full slip. Specifically, the skin at or near the periphery of the contact area (where there is a lesser normal stress and hence less friction) slips first, and this slip pattern propagates inwards across the contact area until the entire contact is slipping at full (also called overt or gross) slip (André et al., 2011; Barrea et al., 2018; Delhaye et al., 2016; Gueorguiev et al., 2016; Adams et al., 2013a,b). This slippage of part of the contact area that precedes full slippage is termed incipient slip. The threshold for and propagation of incipient slip is further influenced by biophysical factors, such as finger (and object) curvature and softness (which create a pressure differential across the contact area; Fakhari et al., 2019). Given that incipient slip occurs over time, local skin deformation (stretch or vibration) may offer vital cues for rapid grip adjustment. Specifically, the evolution of slip on smooth surfaces has a duration of ~100 ms to ~300 ms (Gueorguiev et al., 2016), and because grip force can be adjusted within ~100 ms of object contact (Cadoret and Smith, 1996), incipient slip offers a window to adjust grip force and secure the object load. However, there may be some friction force (i.e., tangential force) threshold required to achieve sufficient skin deformation to provide relevant cues, because during passive touch conditions (i.e., where the finger is stroked by a plate), research demonstrates that perception of slip is poor under conditions of low tangential force and low friction, both contexts exhibiting little skin deformation (Barrea et al., 2018; Johansson and Westling, 1984).
Greater friction allows more stress/strain to develop in the skin when loaded with the object, and so, the transient event associated with the friction bonds being broken when slip finally occurs will be more dynamic, which may significantly influence slip detection and subsequent friction estimation.
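The inward propagation of slip from the contact periphery can be illustrated with the classical Cattaneo-Mindlin result for an idealised elastic sphere-on-flat contact (a textbook contact-mechanics model, not a model of finger pad skin): under normal load P and tangential load Q < μP, an annulus at the edge of the contact slips while a central circle of radius c remains stuck, with c/a = (1 − Q/(μP))^(1/3). The loads below are arbitrary illustrative values.

```python
def stick_radius_fraction(tangential_load, normal_load, mu):
    """Cattaneo-Mindlin: fraction c/a of the contact radius that remains stuck."""
    if tangential_load >= mu * normal_load:
        return 0.0  # full (gross) slip: the whole contact area is sliding
    return (1.0 - tangential_load / (mu * normal_load)) ** (1.0 / 3.0)

mu, P = 0.8, 2.0  # illustrative friction coefficient and normal load (N)
for Q in (0.0, 0.8, 1.2, 1.6):  # increasing tangential (load) force (N)
    frac = stick_radius_fraction(Q, P, mu)
    print(f"Q = {Q:.1f} N -> stuck fraction c/a = {frac:.2f}")
# As Q approaches mu*P, the stuck zone shrinks towards zero: incipient slip
# gives way to full slip exactly when the Coulomb limit is reached.
```

This idealisation captures the qualitative picture described above: partial (incipient) slip occupies a growing peripheral annulus as the tangential load rises, until the Coulomb limit is reached and the whole contact slides.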
An interesting feature of the finger pad skin is its 'ridge and valley' structure (i.e., the fingerprint), which assists its capacity to stretch and deform according to the forces acting upon it (Dzidek et al., 2017b; Jones and Lederman, 2006; Scheibert et al., 2009; but see Warman and Ennos, 2009). The fingerprint ridges and valleys are thought to contribute to the transduction of incipient slip. To explain, when incipient slip occurs between the skin and a rigid object surface, there must be shear deformation of the skin, and this deformation is encouraged by the heterogeneity of finger pad stiffness caused by the fingerprint structure. Specifically, the skin resists shear deformation along the direction of the ridges, but relatively freely deforms in the direction perpendicular to the ridges, with the ridges opening and closing like the folds of an accordion (Wang and Hayward, 2007; Delhaye et al., 2016). In fact, research demonstrates that the ridges tend to orientate perpendicular to the direction of object movement, further facilitating the incipient slip phenomenon, and possibly also our ability to feel it. So, the fingerprint ridges endow the skin with the ability to easily experience shear deformation. This is important because, as previously mentioned, stiff finger pad skin has a reduced capacity for skin deformation (and associated activation of sensory receptors in the skin) during incipient slip, and so, the available tactile information for higher processing is lessened and can result in weaker tactile perception (Abdouni et al., 2017).
Detecting shear deformation of the contact area would be a means of detecting incipient slip and subsequently triggering a gripping correction. Accordingly, three conceivable mechanisms for detecting incipient slip are, by sensing: (i) shear deformation transients (i.e., stretching or compression) at the propagating boundary between stuck and slipping skin; (ii) vibration caused by shear deformation or by the sliding contact between the skin and object; and/or (iii) spatial patterns (e.g., textures) on the object translating across the skin. Notably, (i) and (ii) are similar in nature and may be differentiated by the spatial direction of the changes in skin strain (with vibration in any direction being considered), and by the frequency spectrum of the event. Similarly, (ii) and (iii) share some commonality, because a spatial pattern translating across the skin will create indentations normal to the skin surface that will also create vibration (higher frequency deformation of the skin) if the speed of translation is sufficiently fast and/or the spatial frequency sufficiently high. It should be noted, of course, that sensing vibration or a translating spatial pattern requires that the object surface either: (i) has a non-smooth texture, such that there is sufficient indentation or shear deformation (catching) of the skin to create vibration or to discriminate and track a spatial pattern; or (ii) has sufficient friction to induce intermittent stick-slip events, as can happen when sliding the finger pad across smooth glass, for example. Therefore, there is an expectation that incipient slip is most easily detected for a high-friction textured object, which is perhaps why ice (smooth and low-friction) is such a difficult material to manipulate without dropping. If the object is pliant, however, shear deformation on the contact area need not imply that slip is happening and could instead mean the object surface has also experienced the same shear deformation.
In this scenario, sensing shear skin deformation would be insufficient to detect incipient slip. This would leave vibration or spatial pattern tracking as the main means of detecting incipient slip, necessitating that the object surface be sufficiently textured or have sufficient friction to induce the stick-slip phenomenon.
Object slip and grasp security are in part signalled by mechanical events occurring on the skin surface (e.g., patterns of skin deformation, vibration associated with shear deformation). This section has highlighted biophysical properties, such as the fingerprint structure and endogenously regulated skin hydration, that affect skin stiffness and friction between the skin and the object. In particular, the fingerprint structure allows the skin to stretch and deform which likely improves its tactile spatial acuity (Cramphorn et al., 2017). In this regard, finite element models demonstrate that without ridges there is a more uniform stress distribution across the skin surface during contact with objects (Maeno et al., 1998) and this may limit the extent or distinctiveness of skin patterning during object handling and weaken the associated tactile cues available for perceptual processing (Delhaye et al., 2014). Indeed, fingerprint ridges (and the underlying papillae that protrude from the dermal layer of skin that support them) may also play a crucial role in coupling the stress and strain distributions on the surface of the finger pad skin to the underlying mechanosensory system. Notwithstanding this, both the responses in the cutaneous mechanoreceptors beneath the finger skin surface and the biophysical properties associated with mechanical events on the skin surface are confined to providing local contact-specific information. To comprehensively understand dexterous object manipulation, it is also important to consider how humans adapt their gripping action to external forces; for example, during perturbation or spontaneous errors applied to multiple fingers and/or the hand. 
So, the next section will discuss biomechanical aspects of secure gripping, in particular, the appropriate scaling of grip force to object load, coordination between finger pad forces and digit position, and the contribution of kinaesthetic muscle afferent activity (e.g., signalling joint kinematics and torques) to the control of gripping behaviour.

Biomechanics and secure gripping
To secure the grasp throughout manipulation, we must continually adapt the grip and lifting forces exerted by the hand on the object in order to oppose or overcome the weight and inertial forces exerted by the object on the hand (Augurelle et al., 2003; Johansson and Westling, 1987; Turrell et al., 2001). As previously mentioned, initial grasp choice is thought to be informed by internal models of previously experienced grasps and current visual input. However, initial grasp choice does not guarantee a secure grip, particularly in unpredictable conditions (e.g., with unknown objects or where a load and its weight distribution vary over time), because secure grasping requires control of the magnitude, direction, and location of the applied grip force (Johansson and Westling, 1984; Li et al., 2013). Often, continuous grasp adjustment is required during object handling in response to object sliding (Grover et al., 2019).
To avoid dropping an object, an appropriate grip force is applied by the fingers and hand according to the load force exerted by the object (Johansson and Westling, 1984; Johansson, 1996). Evidence indicates that humans typically achieve close to a minimal grip-to-load force ratio during object manipulation: just enough force to exceed that required to avoid the object slipping or falling (i.e., a safety margin of grip force; Johansson and Westling, 1987). Interestingly, there is a close temporal relationship between the grip force applied and the load force exerted by the object during its manipulation, and this grip-to-load force coupling has been observed under various conditions; for example, in discrete grip and lift tasks (Johansson and Westling, 1984), during grasping of objects with different surface textures and under different friction conditions (Cadoret and Smith, 1996; Burstedt et al., 1999; Flanagan and Wing, 1997), when cyclically moving objects at different frequencies, even when an initial excessive grip force is imposed (Flanagan and Wing, 1993; Flanagan and Tresilian, 1994; White et al., 2005), with tangential torque (Kinoshita et al., 1997), and during gripping while jumping (Flanagan and Tresilian, 1994).
Notwithstanding these observations, research also demonstrates that the temporal characteristics of grip-to-load force coupling are not always strictly limited to the goal of maintaining a stable safety margin of grip force, and can instead be a function of task demands and/or the variability of the context (Grover et al., 2018; Hadjiosif and Smith, 2015). For example, studies show that grip force can be scaled according to the variability of load force dynamics (e.g., when a force field is randomly applied to the object), and that this uncertainty-driven, adaptive grip force is more sensitive (i.e., reduced safety margin for low variability and increased safety margin for high variability; Hadjiosif and Smith, 2015) and responsive (Grover et al., 2019) to the load force than when the load is more predictable. So, grip-to-load force coupling need not be continuous and proportional, but can be coordinated in an intermittent manner with phasic updating of grip-to-load force ratio as the context of the task changes (Grover et al., 2018). Such periodicity would be advantageous for motor control as it would reduce the inherent sensory noise and interference during movement (i.e., disturbance in sensorimotor signals resulting from movement; Faisal et al., 2008) and may provide instants of greater sensitivity in the system (Loram et al., 2011). So, an appropriately-controlled grip force during object manipulation may be informed by internal modelling of the variability of the context, and flexibly adjusted using sensory feedback mechanisms.
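A minimal sketch of this grip-to-load coupling can be made by assuming a two-digit precision grip obeying Coulomb friction: slip is avoided when 2μG ≥ L, so the minimum grip force is L/(2μ). The variability-scaled safety margin below follows the idea that more variable loads warrant a larger margin (Hadjiosif and Smith, 2015); the margin gain and all numerical values are arbitrary illustrative choices.

```python
def min_grip_force(load_force, mu):
    """Smallest grip (normal) force avoiding slip for a two-digit precision
    grip under Coulomb friction: 2 * mu * grip >= load."""
    return load_force / (2.0 * mu)

def target_grip_force(load_force, mu, load_sd=0.0, margin_gain=3.0):
    """Minimum grip plus a safety margin that grows with load variability
    (margin_gain is an arbitrary illustrative choice)."""
    return min_grip_force(load_force + margin_gain * load_sd, mu)

mu = 0.6    # illustrative skin-object friction coefficient
load = 4.0  # object load force (N)
print(f"minimum grip:         {min_grip_force(load, mu):.2f} N")
print(f"predictable load:     {target_grip_force(load, mu, load_sd=0.0):.2f} N")
print(f"highly variable load: {target_grip_force(load, mu, load_sd=0.5):.2f} N")
```

The sketch makes concrete why low friction (small μ) demands disproportionately large grip forces, and how an uncertainty-driven safety margin raises the target grip force only when the load is variable.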
In addition to appropriate scaling of grip force to load force during finger-object interactions, the fingers/hand must also be able to counteract any unwanted mechanical effects, such as the displacement caused by the load force (Mizrahi, 2015). Controlling the effects of external disturbances or force perturbations on the hand during object manipulation depends in part on the mechanical impedance, or stiffness, of the fingers/hand (White et al., 2011; Fiorilla et al., 2011; Hajian and Howe, 1997). In this regard, finger stiffness typically increases automatically with an increase in grip force, and this stiffer finger posture provides stability during object grasping (Hasegawa and Kitamura, 2014). The extent of finger stiffness is influenced by muscle afferent activity (i.e., muscle contraction, finger kinematics, and joint angles; Hajian and Howe, 1997; Crevecoeur et al., 2017). Additionally, humans can carefully calibrate and coordinate muscle stiffness across the digits and hand during object handling; for example, through cocontraction of digits (i.e., when antagonistic muscles simultaneously and synergistically function at a joint to alter finger stiffness; Höppner et al., 2017, 2013; Winges et al., 2007).
Regulation of finger/hand stiffness for motor control appears to be modulated by activity in the central nervous system (Kourtis et al., 2008; Hajian and Howe, 1997). Specifically, resisting sudden changes in load force requires a rapid motor response, such as the kinaesthetic muscle activity (relating to position and stretch) observed at an onset latency of ~25–50 ms (i.e., short latency reflex) or ~50–105 ms (i.e., long latency reflex; Scott et al., 2015; Nashed et al., 2014; Kourtis et al., 2008; Pruszynski et al., 2008). Research demonstrates that while the short latency reflex is mediated by spinal processing, the long latency reflex can be modified in a task-dependent and goal-driven manner (Nashed et al., 2014; Pruszynski, 2014; Scott et al., 2015; Pruszynski et al., 2008). So, the long latency response seems to be a tuned response for accurate movement following mechanical perturbation rather than merely an automatically triggered response (as traditionally assumed; Pruszynski et al., 2008; Nashed et al., 2014). In support of this, evidence indicates that the long latency motor response is transcortical in nature, with sensorimotor cortical activity observed immediately before the reflex response (Kourtis et al., 2008; Kurtzer, 2015). The contribution of higher-level processes to the scaling of finger grip force and stiffness is also evident in studies demonstrating delayed peak grip force (~65 ms after an expected external impact) when grasping a hand-held object during collision with an external force (White et al., 2011). In this regard, given typical increases in finger stiffness with increases in force, delaying peak force also delays peak stiffness and allows any destabilising (or potentially harmful) effects of the impact to be absorbed.
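The latency windows quoted above can be organised in a small sketch; the hard cut-offs below are illustrative simplifications of the cited ranges, not physiological boundaries.

```python
def classify_response(latency_ms):
    """Assign a muscle response onset latency to the windows quoted in the
    text (short latency ~25-50 ms; long latency ~50-105 ms); later responses
    are typically considered voluntary."""
    if latency_ms < 25:
        return "pre-reflex (too early for a stretch response)"
    if latency_ms <= 50:
        return "short latency reflex (spinally mediated)"
    if latency_ms <= 105:
        return "long latency reflex (task-dependent, transcortical)"
    return "voluntary response"

for t in (20, 35, 80, 150):
    print(f"{t:>3} ms -> {classify_response(t)}")
```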
Overall, actively adjusting muscle contraction or stiffness of the digits across the hand offers a control strategy for adapting to environmental change, particularly during unexpected perturbations, impulsive force, and/or when grasping objects with asymmetrical mass distributions (Mizrahi, 2015;Hasegawa and Kitamura, 2014;Winges et al., 2007).
A final consideration worth mentioning in relation to the accurate scaling of grip force during skilful object manipulation is the modulation of force according to finger position on the held object. In this regard, precise coordination between finger force and digit position (i.e., appropriate grasp configuration) is required to preserve object balance (Davare et al., 2019; Fu et al., 2010; Winges et al., 2009). Typically, humans optimise grip security and reduce the torque load through intelligent digit placement when forming the grasp; for example, by grasping an object near its centre of mass (particularly when using a precision grip; Glowania et al., 2017). Conversely, when finger-object interaction is more unpredictable, as is the case during unconstrained grasping or grasping objects with visually unapparent asymmetrical mass distributions, studies demonstrate that grip force is modulated as a function of finger position to minimise torque (Fu et al., 2010; Parikh et al., 2020). Further, the coordination of finger force and finger position during object gripping appears to be regulated in part by planned grip force (particularly when visual information is available) and in part by sensed finger placement, which facilitates any necessary adjustment of force when finger position is variable (Fu and Santello, 2014; Fu et al., 2010). The planning component of finger force-to-position scaling seems to rely on sensorimotor memory-based processes involving the primary motor cortex (Parikh et al., 2020). Correspondingly, the coordination of planned and online sensory feedback, as might be required during more unfamiliar gripping conditions, involves both the primary motor cortex and the somatosensory cortex (Parikh et al., 2020).
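The force-to-position trade-off described above can be sketched with a simple statics example, assuming a two-digit grasp of a rigid bar whose centre of mass is offset from the grasp midpoint: load sharing between the digits must satisfy both force and torque balance. The geometry and loads are arbitrary illustrative values.

```python
def load_share(digit_sep, com_offset, weight):
    """Vertical load carried by each of two digits so that net force and net
    torque on the object are both zero.

    digit_sep:  distance between the two digit contacts (m)
    com_offset: centre-of-mass offset from the grasp midpoint, towards
                digit 1 (m); must satisfy abs(com_offset) <= digit_sep / 2
    weight:     object weight (N)
    """
    half = digit_sep / 2.0
    f1 = weight * (half + com_offset) / digit_sep  # digit nearer the CoM
    f2 = weight - f1
    return f1, f2

# Grasping a bar whose centre of mass sits 2 cm towards digit 1:
f1, f2 = load_share(digit_sep=0.08, com_offset=0.02, weight=3.0)
print(f"digit 1: {f1:.2f} N, digit 2: {f2:.2f} N")  # the nearer digit bears more load
```

The asymmetry in digit loading is what the cited studies observe behaviourally: when the centre of mass is offset, force must be redistributed across the digits (or digit placement adjusted) to null the torque and keep the object balanced.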
This section of the review emphasises that controlling the magnitude, direction, and location of grip force during object manipulation requires both tactile and proprioceptive (kinaesthetic) sensory systems (Crevecoeur et al., 2017). Overall, human dexterous object handling is influenced by the biophysical and biomechanical properties of the fingers and hand, such as the endogenous regulation of skin hydration, fingerprint structure, finger-specific and inter-digit positioning and mechanical impedance, and neuromuscular control. The following summary section will attempt to offer some tentative links between candidate biophysical/mechanical events and sensory systems that may explain how humans achieve grip security during object manipulation. Further, some potential areas of future research are suggested, that may assist in elucidating the causal mechanisms underlying dexterous manipulation.

Integrated summary and conclusions
Manipulation of objects with the fingers is a seemingly effortless everyday task; in reality, it is a complex sensorimotor process that involves a sophisticated interplay between sensory, motor, and cognitive systems (Nowak et al., 2013). As previously recognised, skilful object manipulation involves two primary components, that is, control over the object being manipulated and control over the movement trajectory of the arm, hand, and digits. Action control is crucial during object manipulation, and researchers suggest that a key informant of initial grasp choice and overall planning of the motor output during object manipulation (i.e., grasp and transportation) is a precise end-state (Rosenbaum et al., 2012, 1996). For example, research exploring human kinematic manoeuvres during object reaching and grasping demonstrates that the finger contact points are functionally organised, in that, during grasping, the fingers are configured according to an object's affordance (i.e., its spatial properties, such as shape, that indicate potential use and/or action) and the end-state of the immediate action goal (e.g., a goal to move a bottle to a different location or to pour from it; Sartori et al., 2011; Ambrosini et al., 2011; Rosenbaum et al., 2012). Such configuration avoids biomechanical discomfort and maximises ease and fit of the end-state posture (Jeannerod, 1999; Rosenbaum et al., 2012). Once acquired (during human development), these "optimising principles" appear to be characteristic of the motor system's everyday function and facilitate efficient motor configuration and execution (Jeannerod, 1999). Additionally, visual information relating to object shape provides manipulative affordance that can be integrated into motor programs to facilitate reaching and grasping (e.g., by reducing movement time and informing grip configuration; Sartori et al., 2011; Gordon et al., 1991).
However, visual information by itself does little to inform us of the non-spatial properties of an object, such as weight, and so, in this instance visual information combined with knowledge of object dynamics and tactile feedback will be most informative (Sartori et al., 2011).
This review has particularly focused on the role of force-related tactile information for successful dexterous object manipulation because force-related events at the finger-object interface continually modify the object's load profile and associated impressions in the skin, and so, at least in part, involve reactive sensory responses to update the corresponding grip force (Flanagan et al., 2003). Further, global frictional properties at the skin-object interface are generated according to current conditions (e.g., skin hydration, object material, etc.) and influence the extent and/or patterning of skin deformation (Messaoud et al., 2016; Dzidek et al., 2017a; Klaassen et al., 2017). In this regard, it is hypothesised that skin deformation and stretch (and/or associated vibrations) are key in offering the essential cues affording sensation and perception of object grasp security (Messaoud et al., 2016). For instance, smaller contact friction is associated with reduced skin deformation and poorer perception of object slip (Barrea et al., 2018). However, mechanical events at the finger-object interface are complex because object properties, such as hardness, compliance, texture, and curvature, exert mechanical change in the finger pad and alter the skin structure (e.g., the ridge and valley fingerprint, keratin) and biophysical properties (e.g., viscoelasticity) of the finger pad skin, causing indentation, deformation/stretch, pressure differentials, and/or vibration (Pruszynski et al., 2018; Delhaye et al., 2014; Derler et al., 2009; Levesque and Hayward, 2003; Yoshioka et al., 2007). So, although these mechanical events provide spatiotemporal information (e.g., the distribution of skin deformation; regular or intermittent vibration generation) that informs sensorimotor systems and purportedly facilitates secure grip (e.g., Hollins and Bensmaia, 2007), discerning the precise link between the mechanical events at the finger-object interface and the strategies used by humans during gripping is not trivial.
Securely gripping an object in everyday situations is typically not a passive event: we grip an object in order to use it, and this use necessitates adaptation to dynamic skin-object interaction forces. Analysis of human biomechanical adaptation to, for example, slip evolution during object manipulation (to avoid dropping the object) offers some understanding of how humans adjust their grip to maximise object security. Specifically, research demonstrates that humans frequently employ adjustment strategies: for example, redistributing normal forces between individual digits to limit or prevent object rotation (Gao et al., 2006), and/or planning or adjusting digit configuration on the object to ensure grasp at the opposition axis, where opposing grip forces are exerted (Stelmach et al., 1994; Iberall et al., 1986). Further, the preceding discussion of (bio)physical-mechanical and tactile sensory mechanisms at play during object gripping and manipulation suggests that the success humans achieve in skilfully and securely manipulating objects may be attributable to particular control mechanisms that alert the motor system to react (e.g., by modifying grip force); specifically, information about the speed or course of object slip (i.e., slip evolution, which provides temporal information that can calibrate reaction time) and information about the grip force prior to slip onset (Damian et al., 2018). These reactive control mechanisms likely also provide important information for updating internal models and facilitating predictive control in future, similar scenarios (Johansson and Flanagan, 2009).
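One way to make the reactive component of this scheme concrete is as a toy discrete-time update rule: boost grip force sharply when a slip event is sensed (e.g., via incipient-slip vibration or localised skin strain), and otherwise relax slowly toward the slip limit plus a safety margin, mirroring the efficiency principle that humans do not grip much harder than necessary. This is a minimal sketch under a two-digit Coulomb friction assumption; the gain, margin, and decay values are illustrative, not estimates from the literature:

```python
def reactive_grip_update(grip_force: float, load_force: float, mu: float,
                         slip_detected: bool, gain: float = 1.5) -> float:
    """One step of a toy reactive grip controller.

    On a sensed slip event, increase grip force multiplicatively (but at
    least up to the target); otherwise, decay gently toward the target,
    i.e., the Coulomb slip limit plus a 30% safety margin.
    """
    slip_limit = load_force / (2 * mu)   # two-digit Coulomb slip limit
    target = 1.3 * slip_limit            # illustrative safety margin
    if slip_detected:
        return max(grip_force * gain, target)
    # relax toward the target to avoid wasteful over-gripping
    return grip_force + 0.2 * (target - grip_force)
```

In this caricature, the slip signal plays the calibrating role described above: without it, the controller can only drift toward a (possibly stale) friction estimate, whereas each slip event snaps the grip force back above the true slip limit.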
To causally associate specific neurobiological mechanisms or strategies with secure object manipulation requires further research because, for example, it remains unclear whether friction-related skin deformation (which FA-I and SA-I tactile units seem to signal; Johansson and Flanagan, 2009) is fundamentally responsible for, firstly, triggering a specific neural tactile unit response and, secondly, informing us that the object is slipping. Regarding the latter, future behavioural- and psychophysical-level research might examine how the skin deformation patterns arising from different interfacial frictional states (i.e., low, medium, and high coefficients of friction) influence or determine our ability to consciously perceive slip and, if they do, whether specific deformation patterns, intensities, or depths facilitate this. This seems particularly important because initial findings indicate that we can perceive slip onset, but that this detection is constrained by the level of friction at the finger-object interface (Barrea et al., 2018). Empirical investigation is also required to establish whether it is necessary to consciously perceive slip in order to successfully react to it; backward masking paradigms (i.e., where a stimulus influences action even though it is rendered consciously invisible by the presentation of a second stimulus ~40–80 ms afterwards) would be helpful in this regard, particularly because previous studies indicate that masked stimuli influence movement trajectory (e.g., Cressman et al., 2007).
Given the importance of skin behaviour in the transduction and sensation of force-related information at the skin-object interface (e.g., with SA-I units apparently encoding sustained pressure-related skin indentation and FA-I units encoding transient motion across the skin), scrutiny of the precise skin deformation patterns occurring during the various stages of slip, such as secure object grasping, slip evolution (i.e., incipient slip of the object), and full slip of the object, will likely contribute to understanding the mechanisms that support secure object manipulation. In this regard, high-resolution visual imaging (possibly with simultaneous microneurographic recording from peripheral afferent nerves) may be useful for examining how humans use early deformation-related cues to reactively (i.e., driven by sensory error, as opposed to relying on predictive processing) adjust grip force. Finally, to comprehensively understand the role of skin deformation in sensing and reacting during object manipulation, it would be interesting to explore how limiting the capacity of the finger pad skin to deform, for example, by gluing the finger pad skin (fully and/or partially) or by wearing gloves (previous research demonstrates that glove wearing reduces vibration perception ability; Carré et al., 2017), might influence gripping behaviour and the success or dexterity of manipulation. This is important given that skin stiffness is known to reduce discriminative tactile sensation (Abdouni et al., 2017; Skedung et al., 2013).
The importance of comprehensively understanding human gripping behaviour is underscored by the increasing need for, and use of, biomimetic robotic and prosthetic tactile systems (i.e., systems inspired by human touch and proprioception), for example in remote minimally invasive surgery, smart touch surfaces, virtual reality, and rehabilitation (Kappassov et al., 2015). As yet, however, while kinematic control in these systems can be extremely accurate, their ability to precisely sense and assimilate tactile and force information remains somewhat crude, and so their capacity to operate in unstructured environments is limited (e.g., Touvet et al., 2012). A capacity to emulate human sensorimotor control in these systems could facilitate enhanced control of gripping behaviour through tactile and error feedback (Scibelli and Krans, 2016), and also offer more sensitive robot-human interaction (Kappassov et al., 2015).
In conclusion, whether human dexterous object manipulation relies primarily on a specific control strategy, such as biomechanical adaptation (e.g., redistributing normal forces between individual digits; planning digit-object configuration to ensure grasp at the opposition axis), reactive control (e.g., detecting slip via sensing of incipient slip and/or manipulation forces), or friction-generated skin deformation (and associated vibration), or whether manipulation success relies primarily on accurate predictive control (i.e., anticipating action consequences), has yet to be fully determined. Further empirical investigation and evidence in this regard will also assist in arbitrating between the different theoretical views on human action control during manipulation (i.e., representational/predictive vs non-representational/sensorimotor; see Table 1). Given the control humans exhibit during object manipulation (adults rarely drop objects), it seems likely that optimisation principles are at play which increase efficiency (e.g., of manipulation manoeuvres) and accuracy (e.g., of the motor control system). The key challenge for researchers is to specify the fundamental mechanisms underlying secure gripping behaviour during object manipulation.

Declaration of Competing Interest
The authors declare no conflict of interest in conducting the research presented here.