Decoding grip type and action goal during the observation of reaching-grasping actions: A multivariate fMRI study

During execution and observation of reaching-grasping actions, the brain must encode, at the same time, the final action goal and the type of grip necessary to achieve it. Recently, it has been proposed that the Mirror Neuron System (MNS) is involved not only in coding the final goal of the observed action, but also the type of grip used to grasp the object. However, the specific contribution of the different areas of the MNS, at both the cortical and subcortical level, in disentangling action goal and grip type is still unclear. Here, twenty human volunteers participated in an fMRI study in which they performed two tasks: a) observation of four different types of actions, consisting of reaching to grasp a box handle with two possible grips (precision, hook) and two possible goals (open, close); b) action execution, in which participants performed grasping actions similar to those presented during the observation task. A conjunction analysis revealed shared activated voxels for both action observation and execution within several cortical areas, including dorsal and ventral premotor cortex, inferior and superior parietal cortex, intraparietal sulcus, primary somatosensory cortex, and cerebellar lobules VI and VIII. ROI analyses showed a main effect of grip type in several premotor and parietal areas and in cerebellar lobule VI, with higher BOLD activation during observation of precision vs hook actions. A grip x goal interaction was also present in the left inferior parietal cortex, with higher BOLD activity during precision-to-close actions. A multivariate pattern analysis (MVPA) revealed significant decoding accuracy for the grip model in all ROIs, whereas for the action goal model significant accuracy was observed only in the left inferior parietal ROI.
These findings indicate that a large network of cortical and cerebellar areas is involved in the processing of grip type, while the final action goal appears to be processed mainly within the inferior parietal region, suggesting a differential contribution of the areas activated in this study.


Introduction
The goal of an action (e.g., drinking from a glass) is achieved through a fluent sequence of motor acts, each characterized by its own sub-goal (e.g., reaching and grasping the glass to take possession of it, bringing it to the mouth, and then grasping it with the mouth) (Jeannerod et al., 1995; Rizzolatti et al., 2014). In this framework, once an individual has selected the final goal of a reaching-grasping action, its implementation requires programming the various types of movements composing each motor act, including both kinematic parameters (such as trajectory, speed, acceleration, amplitude) and the type of grip most suitable for interacting with the object (Grafton and Hamilton, 2007). This implies that, in order to drive behavior, the brain must represent all these factors at the same time.
Neurophysiological studies in monkeys demonstrated that grasping an object based on visual information requires, first of all, the transformation of object features into the type of grip and wrist orientation most suitable for interacting with the object. Moreover, the final goal of the action and the specific grip used to achieve it can interact at the single-neuron level. Bonini and colleagues (2012) directly addressed this issue by recording neuronal activity from monkey areas PFG and F5 during the execution of simple grasp-to-eat and grasp-to-place natural actions, each performed with different grip types. The authors showed that most neurons in both areas are selective for grip type, but the discharge of many of them, particularly in PFG, also differentiates the final goal of the action, suggesting the relevance of this parietal area for the integration of multiple types of information about the action to be performed.
The neural elaboration of the final action goal and grip type, as well as their interaction, is also important during the observation of actions performed by another individual. Single-neuron and neuroimaging data demonstrated that the Mirror Neuron System (MNS) is involved in the visuomotor transformations that allow the observer to understand an observed action by matching it onto her/his own motor representation (Rizzolatti et al., 2014). The initial studies in monkeys (Gallese et al., 1996; Rozzi et al., 2008) showed that mirror neurons are present in F5 and PFG. Neurons with mirror properties have subsequently been described within a network of interconnected areas including AIP (Lanzilotto et al., 2019; Maeda et al., 2015; Pani et al., 2014), PMd (Papadourakis and Raos, 2019; Tkach et al., 2007), and the mesial frontal cortex (pre-SMA) (Albertini et al., 2020; Lanzilotto et al., 2016; Yoshida et al., 2011).
The existence of a comparable action observation/execution system in humans, homologous to that found in monkeys, is now well established (Molenberghs et al., 2012a). This system is mainly constituted by the inferior parietal lobule (IPL) (both its convexity and the intraparietal sulcus (IPS)) and PMv, plus the caudal part of the inferior frontal gyrus (IFG) (Caspers et al., 2010; Hardwick et al., 2018). Recently, neuroimaging studies reported that other cortical and subcortical areas, such as PMd, the superior parietal lobule (SPL) (Filimon et al., 2007; Gazzola and Keysers, 2009) and cerebellar lobules VI and VIII (Abdelgabar et al., 2019; Errante and Fogassi, 2020; Gazzola and Keysers, 2009), are consistently recruited during both execution and observation of reaching and grasping actions, thus suggesting their involvement in an extended MNS.
The most important property of the MNS is that of coding the goal of observed motor acts (Gallese et al., 1996; Umiltà et al., 2008) and the final action goal (Bonini et al., 2010). According to the definitions given above regarding action execution, the two types of coding are different. The former refers to one specific segment of the action, which has its own sub-goal, while the latter refers to the achievement of the overall motor goal of the entire action, which coincides with the agent's motor intention. The coding of action goal has been confirmed by human studies (Gazzola et al., 2007a; Shimada, 2010) on the observation of actions performed by a human vs. an artificial agent (e.g., robotic arms). Further evidence for the recruitment of the MNS in the processing of action goal derives from the study of aplasic patients (individuals born without arms and hands) observing actions performed with the hands and executing actions with the mouth and the foot (Gazzola et al., 2007b).
On the other hand, the MNS is also involved in the elaboration of other features of the observed action, such as type of grip (Grafton and Hamilton, 2007) and other subtle kinematic features. For example, Casile et al. (2010) presented people with videos of rotational arm movements and found that, relative to movements that violated the two-thirds power law, those that complied with it induced greater activation in left premotor and dorsofrontal regions. This implies that the MNS can be involved, at the same time, in the processing of the final action goal, the type of grip used and kinematic parameters, depending on the action context. The fact that action observation generally recruits not only areas of the ventral parieto-frontal circuit but also some areas belonging to the dorsal circuit (PMd, SPL) has given new insights into the role of these circuits, originally described as distinct modules for reaching and grasping, respectively (Jeannerod et al., 1995; Rizzolatti and Matelli, 2003). In fact, it has recently been suggested that, in both humans and monkeys, both circuits can be involved in processing different aspects of reaching-grasping actions (Grol et al., 2007; Nelissen et al., 2018; van Polanen and Davare, 2015).
The majority of fMRI studies on action observation were not focused on the contribution of specific areas of the MNS to decoding grip and action goal. To our knowledge, only one fMRI experiment (Hamilton and Grafton, 2008) used the repetition-suppression (RS) technique to distinguish the areas involved in the processing of action outcome vs. type of grip. RS for repeated outcome was observed in the right hemisphere in both IFG and IPL, extending into the anterior IPS. Conversely, RS for repeated grip was shown in the left middle IPS and STS, although this finding did not reach significance.
Recent advances in fMRI data analysis have made it possible to investigate more specifically the properties of the cortical areas involved in the execution and observation of reaching-grasping actions (Filimon et al., 2015; Koul et al., 2018; Molenberghs et al., 2012b; Nelissen et al., 2018) by adopting Multivariate Pattern Analysis (MVPA) based on machine learning algorithms (Pereira et al., 2009). MVPA can detect subtle pattern differences, extracting the signal associated with a specific experimental condition by considering the pattern of response across multiple voxels (Haxby, 2012; Norman et al., 2006). This approach can be useful for investigating differences in activation patterns within MNS areas during action observation when a univariate approach based on averaged activation does not reveal specific differences (Mur et al., 2009).
In the present fMRI study, healthy participants were required to observe reaching-grasping actions performed with different goals and grips. In order to investigate MNS activations, subjects were also required to perform a motor task consisting of the execution of the same actions presented during the observation task. The main aim was to investigate the cortical and cerebellar activations that are critical for coding the grip and final goal of the observed action. This aim was first addressed using a univariate approach consisting of: (a) direct contrasts between observation conditions; (b) a conjunction analysis between activations elicited by the observation and execution tasks; (c) a Region of Interest (ROI) analysis carried out on the areas revealed by the conjunction analysis. Second, MVPA was performed to investigate the different patterns of activity within MNS areas evoked by the observation of actions characterized by different goals and grips. Based on the relevant previous literature, we hypothesized that: (a) action observation and execution elicit shared activation of the cortical parieto-premotor circuits, as well as of the motor sectors of the lateral cerebellum; (b) grip type may be processed within an extended network of cortical and subcortical areas belonging to the MNS; (c) the final action goal is coded within the main nodes of the MNS, including inferior parietal and ventral premotor areas.

Participants
Twenty human volunteers (11 females; mean age 24.6 years; range 18-27 years) participated in the study and were recruited from the University of Parma (Parma, IT). All subjects had normal or corrected-to-normal vision and were financially compensated for their participation. Only healthy subjects were recruited, with no history of neurological, orthopedic or rheumatological disorders, and no drug or alcohol abuse. All participants were right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971). Four participants (2 females) were subsequently excluded from data analysis: two did not complete the experimental session, and two presented excessive head motion. Head motion during scanning was assessed on the basis of the three translation and three rotation parameters resulting from 3D motion correction (cut-off criterion: < 2 mm for translation, < 2° for rotation). Overall, 16 participants were included in the subsequent analyses. Informed consent was obtained in accordance with the ethical standards set out by the Declaration of Helsinki and with the guidelines for scientific research of the University of Parma (IT). The study was approved by the local ethics committee (Comitato Etico per Parma, University of Parma; code UNIPRMR750v1).
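The head-motion exclusion criterion can be expressed as a simple check over the realignment parameters. The sketch below is illustrative only (the function and array layout are our own, not part of the SPM pipeline) and assumes the six parameters are stored with translations in mm and rotations already converted to degrees (SPM itself reports rotations in radians).

```python
import numpy as np

def exceeds_motion_cutoff(params, trans_cutoff_mm=2.0, rot_cutoff_deg=2.0):
    """Return True if any translation exceeds trans_cutoff_mm or any
    rotation exceeds rot_cutoff_deg. `params` is an (n_volumes, 6) array:
    columns 0-2 are translations (mm), columns 3-5 rotations (degrees)."""
    params = np.asarray(params, dtype=float)
    trans_ok = np.all(np.abs(params[:, :3]) < trans_cutoff_mm)
    rot_ok = np.all(np.abs(params[:, 3:]) < rot_cutoff_deg)
    return not (trans_ok and rot_ok)
```

A participant whose parameters pass this check on every volume would be retained; otherwise the full dataset is dropped, as described above.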

fMRI experimental design
The study was performed during a single imaging session, acquired in six runs, while the participants performed two tasks: (a) observation of visual stimuli consisting of reaching-grasping actions, i.e. grasping a handle with different grips to open or to close a small box (runs 1-4); (b) action execution, in which participants had to perform grasping actions similar to those presented during the observation task, i.e. opening or closing a box with two different grips (runs 5 and 6). The rationale for using a different number of observation/execution runs was that a higher BOLD signal within areas belonging to the parieto-premotor MNS has been consistently reported during motor tasks as compared to passive observation. For this reason, we increased the number of trials for each observation condition (N = 72) as compared to execution trials (N = 30). The whole observation task lasted about 20 min, subdivided into four runs of 5 min each, also in order to maintain subjects' attention. The execution task lasted about 16 min, subdivided into two runs of about 8 min each. The presentation order of the observation/execution runs was balanced across participants: half of the participants started with the observation runs, followed by the execution session, while the remaining participants started with the execution session. Before the imaging session, participants underwent a brief training outside the scanner, lasting about 15 min, which allowed them to familiarize themselves with the MR system and the experimental procedure. During the training, participants were also presented with the setting and the instructions about the tasks to be performed during the fMRI session.

Visual stimuli and conditions
The visual stimuli consisted of video clips showing human actors performing four different types of actions, consisting of reaching and grasping a handle with 2 possible grips (Grip level: Hook, Precision) and 2 possible goals (Goal level: Open, Close). Thus, the resulting actions comprised the following conditions: a) grasping the handle with a hook grip to open the box (Hook_Open); b) grasping the handle with a hook grip to close the box (Hook_Close); c) grasping the handle with a precision grip to open the box (Precision_Open); d) grasping the handle with a precision grip to close the box (Precision_Close) (see Fig. 1A).
The static initial frame of each clip, presented for 2 s, was used as the control condition (Ctrl). All actions were video-recorded both from a subjective perspective, in order to create the visual stimuli used during the fMRI acquisition, and from a lateral perspective (90° angle), in order to investigate the kinematic features of the four different types of actions by means of 2D kinematic analysis. A total of 80 videos, 20 per condition (duration 2 s), were acquired in a lit environment by means of a digital HD camera (© GoPro, Inc., USA), with a frame rate of 100 frames/s and a resolution of 1280 × 720p. All videos subtended 16° × 17.5° of visual angle. Ten repetitions of the action of each condition were performed by 2 actors (one male, one female). This ensured some variation in the agent and some variability in movement execution among different trials, while keeping object, grip and final goal constant.

Kinematic features of stimuli
In order to capture slight changes in the kinematic features of the recorded actions presented in the action observation task, a 2D kinematic analysis was performed on the video stimuli. Tracking software (© Tracker v5.1.2, 2019, Douglas Brown) was used to measure movement trajectory and velocity by marking specific points, consisting of colored spheres (ø 0.5 cm) placed on the tip of the actor's right index finger, the thumb and the wrist. With this arrangement it was possible to calculate grip aperture (cm), measured as the distance between the two markers on the index finger and thumb, and wrist velocity.
The point of origin of the X/Y axes was identified as the start position of the actor's hand. To trace the markers, the auto-tracker function implemented in the Tracker software was used. This procedure compares a template image of the feature of interest, in this case the two markers, searching frame by frame for the best match with the template. In order to achieve better tracking, a point of mass was created in the centre of each marker, using as tracking parameters an evolution rate of 20% and an auto-mark value of 4 (min/max range 1-10), reducing the probability of drifts in the template and of false matches. Using these parameters, it was possible to trace the marker's position in space every 10 ms until the end of the action. The end of Close actions was defined as the contact time between the handle and the box, while that of Open actions was defined as the achievement of a ∼10° angle of aperture of the lid. A 10-cm line drawn on the side of the apparatus was used as the reference measure for software calibration. The calibration was computed by scaling the real distance measured in cm to the image distance expressed in pixels.
Trajectory and grip aperture were calculated using the coordinates of both points of mass on the x and y axes (Fig. 1B). Wrist velocity was calculated using a central finite-difference algorithm, v_x(i) = [x(i+1) - x(i-1)] / (2·dt) (and analogously for v_y), where the value between brackets refers to the step number and dt is the time between two consecutive steps, expressed in seconds. The velocity module accounts for both the x and y components by combining the two vector values, |v| = sqrt(v_x² + v_y²), expressed in cm/s.
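As an illustration, the grip-aperture and central-difference velocity computations described above can be sketched as follows (function names and array layout are hypothetical; dt = 0.01 s corresponds to the 100 frames/s recording).

```python
import numpy as np

def grip_aperture(index_xy, thumb_xy):
    """Euclidean distance (cm) between index and thumb markers, per frame.
    Inputs are (n_frames, 2) arrays of x/y positions in cm."""
    return np.linalg.norm(np.asarray(index_xy) - np.asarray(thumb_xy), axis=1)

def wrist_velocity(wrist_xy, dt=0.01):
    """Velocity module (cm/s) from 2D wrist positions using central
    differences: v_x(i) = (x(i+1) - x(i-1)) / (2*dt), and likewise for y."""
    w = np.asarray(wrist_xy, dtype=float)
    vx = (w[2:, 0] - w[:-2, 0]) / (2 * dt)
    vy = (w[2:, 1] - w[:-2, 1]) / (2 * dt)
    return np.sqrt(vx ** 2 + vy ** 2)
```

The central difference drops the first and last frame, which is why endpoint velocities are not defined by this scheme.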
In order to account for noise in the recorded data, values were averaged and smoothed using a Gaussian-weighted moving average filter in Matlab R2020a (The Mathworks, Inc., Natick, MA, USA). Mean velocity and trajectory data were interpolated and plotted over movement time percentage, allowing comparison between trials of slightly different durations. Details about the main kinematic features of the stimuli are reported in Supplementary Tables 1 and 2.
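A minimal Python analogue of this post-processing step is sketched below, assuming a Gaussian window whose sigma is set to window/5 (an assumption chosen to roughly approximate MATLAB's smoothdata default, not guaranteed to match it) and linear interpolation onto a common 0-100% movement-time axis.

```python
import numpy as np

def gaussian_smooth(signal, window=5):
    """Gaussian-weighted moving average over `window` samples.
    The sigma of window/5 is an assumption approximating MATLAB's default."""
    idx = np.arange(window) - (window - 1) / 2.0
    weights = np.exp(-0.5 * (idx / (window / 5.0)) ** 2)
    weights /= weights.sum()
    return np.convolve(signal, weights, mode="same")

def to_percent_time(signal, n_points=101):
    """Resample a trial onto a common 0-100% movement-time axis so that
    trials of slightly different durations can be averaged point by point."""
    src = np.linspace(0.0, 100.0, num=len(signal))
    dst = np.linspace(0.0, 100.0, num=n_points)
    return np.interp(dst, src, signal)
```

Time-normalizing before averaging avoids smearing the velocity peak across trials whose movements end at different absolute times.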

Observation task procedure
Participants lay supine in the bore of the scanner in a dimly lit environment. Visual stimuli were presented through a digital goggles system (Resonance Technology, Northridge, CA; 60 Hz refresh rate), with a resolution of 800 × 600 pixels and a horizontal field of view of 30°. Digital signals were transmitted to the scanner via optic fiber. Sound-attenuating (30 dB) headphones were used to muffle scanner noise. Each of the four observation runs was acquired using a block paradigm. Each block lasted 14 s and was composed of 6 consecutive videos of the same condition, interspersed with 400-ms inter-stimulus intervals (Fig. 1C). During a typical observation run, a total of 15 blocks of stimuli were presented, 3 blocks for each experimental and control condition. The order of blocks was counterbalanced across subjects. Thus, the entire observation session consisted of a total of 60 blocks, 12 blocks (corresponding to 72 trials) for each experimental and control condition.
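The block timing is internally consistent: six 2-s videos separated by five 400-ms inter-stimulus intervals give exactly 14 s per block, and 12 blocks of 6 videos give the 72 trials per condition reported above.

```python
# Sanity check of the observation block timing and trial counts.
n_videos, video_s, isi_s = 6, 2.0, 0.4
block_s = n_videos * video_s + (n_videos - 1) * isi_s  # ISIs fall between videos
blocks_per_condition = 12
trials_per_condition = blocks_per_condition * n_videos
```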
Blocks of stimuli were interleaved with fixation (rest) events without video clips, lasting 8, 10 or 12 s and used as baseline, in which participants had to fixate a white cross presented in the middle of a black screen. The fixation cross was also maintained during block presentation, in order to keep subjects' fixation. The investigator visually checked subjects' performance, in order to exclude confounding effects due to hand movements during the observation task. E-Prime 2 Professional software (Psychology Software Tools, Inc.; http://www.pstnet.com ) was used both for stimulus presentation and for recording participants' responses to catch trials.

Control Test for Task Attention
In order to ensure that participants attended to the visual stimuli, in 20% of blocks a catch trial was presented after 2, 4 or 6 stimuli of the block, and participants had to provide an explicit response using a response pad positioned on the abdomen. In each catch trial, two simple faces (male/female) were presented on the screen, together with a question asking participants to indicate the gender of the actor observed in the last video clip (male/female). The catch trials (lasting 2 s each) were followed by a 12-s rest period to remove movement-related artefacts (Fig. 1C). A behavioral analysis was performed on the responses given by the participants during catch-trial presentation. For each participant, 12 responses were recorded in the observation session. The mean response accuracy was 96.7% (SD ± 7.09%).

Experimental setting
In two separate runs of the same imaging session, subjects performed a motor task, aimed at investigating MNS activation, in which they were required to perform the same type of reaching-grasping actions presented during the observation task. Stimuli for the motor task were presented on a metal-free apparatus that allowed the presentation of real 3D stimuli to participants lying supine in the scanner. The apparatus was composed of a turntable (diameter 60 cm) with different compartments, which could be turned around its central axis (Fig. 1D). It was mounted on a support with adjustable height. In this study only 4 of the 6 compartments were used, corresponding to the four experimental conditions. Each compartment was separated by a partition, so that only one compartment at a time could be seen by the participant. The participant's head was tilted at an angle of ∼20° and supported by a foam pad, allowing direct viewing of the stimulus without using mirrors and avoiding additional visual transformations. The apparatus was placed at a natural reaching distance (∼15 cm) above the participant's pelvis, to avoid further movements of the upper part of the trunk. The right arm of the participant was placed on a cushion and secured with a belt, allowing easy access to the apparatus while preventing involuntary movements of the arm and shoulder. Stimuli consisted of boxes (dimensions: 5 × 5 × 5 cm) similar to those presented in the Observation task (Fig. 1A).

Motor task procedure
During the two Execution runs, participants were instructed to perform four different types of actions, similar to those shown in the videos during the Observation task, using the objects mounted on the compartments of the apparatus (Fig. 1E). The experimenter was present inside the magnet room during the entire execution session, near the scanner, in order to change the stimuli between different blocks of trials by rotating the device. Instructions about the timing of each block and the type of experimental condition were provided to the experimenter through the digital goggles system, which presented a written instruction indicating the next stimulus to be presented, while MR-compatible headphones were used to give instructions to the subjects. The hand starting position was on the subject's abdomen. Each reaching-grasping action started from the same position and terminated in the same final position. The block sequence was as follows. The experimenter rotated the device (4 s), presenting the stimulus corresponding to one condition in a central position. During the rotation, the participant had to remain in a rest position; she/he then had to fixate the object for 2 s, in order to exclude possible confounding effects due to movement preparation during object fixation. Then, subjects were given an auditory cue (beep sound, 3 s, 400 Hz) instructing them to immediately execute the planned action corresponding to the specific condition (3 s) (see the section Visual stimuli and conditions for a description of the four types of actions). After 3 s the sound was turned off, instructing the subject to return the hand to its starting position (within 3 s). Another trial then began with the presentation of a second cue instructing participants to repeat the same action. During each block the participants performed 3 trials belonging to the same experimental condition.
A baseline period (16 s), in which participants had to remain still with their eyes open, was interleaved between subsequent blocks. During this period participants could directly see both their hand and the apparatus. The duration of each block was 24 s. A typical execution run was composed of 20 blocks (5 blocks per condition). In each run, participants thus performed 15 motor trials for each of the four conditions (30 across the two runs).
fMRI data analysis

Data preprocessing
Data processing was performed with SPM12 (Wellcome Department of Imaging Neuroscience, University College, London, UK; http://www.fil.ion.ucl.ac.uk/spm ) running on MATLAB R2018a (The Mathworks, Inc.). Structural images were centered and reoriented, together with the functional images, to the anterior-posterior commissure axis. The first four EPI volumes of each functional run were discarded to allow the magnetization to reach a steady state. For each subject, all volumes were slice-timing corrected, spatially realigned to the first volume of the first functional run and unwarped to correct for between-scan motion. Motion parameters were used as predictors of no interest in the model to account for translation and rotation along the three possible dimensions, as determined during the realignment procedure. The cut-off used for motion-correction tolerance was the size of the voxel: if motion exceeded this measure in translation and/or rotation, the full dataset of that subject was excluded from the analysis. The T1-weighted image was segmented into grey matter, white matter and cerebrospinal fluid and spatially normalized to Montreal Neurological Institute (MNI) space. The spatial transformation derived from this segmentation was then applied to the realigned EPIs for normalization, and images were re-sampled to 2 × 2 × 2 mm³ voxels using trilinear interpolation. For the normalization of cerebellar data, the T1-weighted images were deformed to fit the SUIT template of the human cerebellum using the SUIT toolbox ( Diedrichsen et al., 2009 ) for SPM12 ( http://www.diedrichsenlab.org/imaging/suit.htm ). The toolbox isolates the cerebellum and creates a mask, which was manually corrected for each participant. Non-linear deformation was then applied to each contrast image. All functional volumes were then spatially smoothed with an 8-mm full-width at half-maximum (FWHM) isotropic Gaussian kernel.
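The 8-mm FWHM kernel can be related to the Gaussian's standard deviation through the usual conversion FWHM = 2·sqrt(2·ln 2)·sigma ≈ 2.3548·sigma. The helper below (our own, not part of SPM) expresses sigma in voxel units for the 2-mm resampled grid.

```python
import numpy as np

def fwhm_to_sigma(fwhm_mm, voxel_mm):
    """Convert a smoothing kernel's FWHM (mm) to the Gaussian sigma in
    voxel units, using FWHM = 2*sqrt(2*ln 2) * sigma."""
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
```

For an 8-mm FWHM on 2-mm voxels this gives a sigma of about 1.70 voxels, which is the value a filter such as scipy.ndimage.gaussian_filter would take as its `sigma` argument.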

Univariate statistical analysis
Data were analyzed using a random-effects model ( Friston et al., 1999 ), implemented in a two-level procedure. At the first level, single-subject fMRI responses were modeled using two different General Linear Models (GLMs), one for the observation and one for the execution task. The design matrix of the first GLM included the onsets and durations of each experimental and control condition, plus the response to catch trials (Obs_Hook_Open, Obs_Hook_Close, Obs_Precision_Open, Obs_Precision_Close, Ctrl and Response), six predictors obtained from the motion correction in the realignment process, to account for voxel intensity variations caused by head movement, and one constant regressor per run. For all predictors except Response, the 6 consecutive videos were modelled as one single epoch lasting 14 s. Catch trials were modelled as consecutive blocks, lasting 14 s each, including the effective response time (2 s) and a signal-denoising period (12 s) to separate the motor component from subsequent processing. Contrasts derived from the parameter estimates were calculated and entered into a flexible factorial within-subjects analysis of variance (ANOVA). Specific effects were tested using t statistical parametric maps (SPM-t), with degrees of freedom corrected for non-sphericity at each voxel. Data from the action Execution task were entered into a second GLM with six predictors (Device Rotation, Planning Phase, Exe_Hook_Open, Exe_Hook_Close, Exe_Precision_Open, Exe_Precision_Close), convolved with the hemodynamic response function (HRF).
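To illustrate how a block predictor of such a design matrix is typically built, the sketch below convolves a boxcar covering each 14-s epoch with a double-gamma HRF. The gamma parameters (peak ~6 s, undershoot ~16 s, ratio 1/6) are SPM-like defaults used for illustration only; this is not SPM's exact implementation.

```python
import numpy as np
from math import gamma as gamma_fn

def double_gamma_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled every `tr` seconds (illustrative,
    SPM-like parameters: peak 6 s, undershoot 16 s, ratio 1/6)."""
    t = np.arange(0.0, duration, tr)
    def gamma_pdf(t, shape, scale=1.0):
        return t ** (shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale ** shape)
    hrf = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6.0
    return hrf / np.abs(hrf).sum()

def block_regressor(onsets_s, dur_s, n_scans, tr=2.0):
    """Boxcar covering each block, convolved with the HRF: one predictor
    column of a first-level design matrix."""
    box = np.zeros(n_scans)
    for onset in onsets_s:
        start, stop = int(round(onset / tr)), int(round((onset + dur_s) / tr))
        box[start:stop] = 1.0
    return np.convolve(box, double_gamma_hrf(tr))[:n_scans]
```

Because the HRF is causal, the predicted response is zero before the block onset and peaks several seconds after it, which is the delay the GLM accounts for.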
In the second-level group analysis, the corresponding t-contrast images of the first-level models were entered into a flexible ANOVA with sphericity correction for repeated measures ( Friston et al., 2002 ). Within this model, we also assessed the activations resulting from the direct contrasts between observation conditions and Ctrl (Obs_Hook_Open vs Ctrl, Obs_Hook_Close vs Ctrl, Obs_Precision_Open vs Ctrl, Obs_Precision_Close vs Ctrl, and all reverse contrasts). Finally, we computed direct contrasts between conditions. These contrast analyses were entered into the subsequent conjunction analysis ( Friston et al., 2005 ), performed to highlight cortical and cerebellar regions involved in both action observation (vs Ctrl) and execution (Obs&Exe_Hook_Open, Obs&Exe_Hook_Close, Obs&Exe_Precision_Open, Obs&Exe_Precision_Close). Statistical inference was drawn at the cluster level, with a threshold of P < 0.001 corrected for multiple comparisons using Family-Wise Error (FWE) correction. Local maxima of activations are presented in the stereotaxic space of the MNI coordinate system. Activations were also localized with reference to cytoarchitectonic probabilistic maps of the human brain, using the SPM Anatomy toolbox v1.7 ( Eickhoff et al., 2005 ).

ROI analysis
In order to investigate possible differences between BOLD activations during the four observation conditions in selected cortical and cerebellar areas, we performed a Region of Interest (ROI) analysis, selecting ROIs that correspond to areas reported in the literature as part of the MNS ( Gazzola and Keysers, 2009; Molenberghs et al., 2012a ). To this aim, we defined the ROIs starting from the group-level results of the conjunction analysis, across the four experimental conditions, at a threshold of P < 0.001, FWE corrected at the cluster level. This allowed us to identify 6 cortical areas in the left hemisphere and 2 cerebellar areas activated in both tasks. To avoid any circularity in ROI localization, the center of each ROI was also determined using an anatomical approach. Starting from the anatomical reference (MNI coordinates) of the maximum probability peak of each area, reported in the standard probabilistic cytoarchitectonic maps included in the SPM Anatomy toolbox, spherical ROI masks (5 mm radius) were created using MarsBaR. The analysis included 6 ROIs defined at the cortical level in the left hemisphere: ROI_1) Left Area 44 (x = -53, y = +7, z = +22), defined according to Amunts et al. (1999), which also includes PMv; ROI_2) Left PMd (x = -26, y = -8, z = +60), defined according to the anatomical study of Geyer (2004), including not only the PMd cortex, laterally, but also part of the SMA and pre-SMA, medially; ROI_3) Left IPL (x = -58, y = -44, z = +40), defined according to the anatomical studies of Caspers et al. (2006, 2008); ROI_4) Left IPS (x = -32, y = -59, z = +51), labeled as Areas hIP2/hIP3 according to the anatomical studies by Choi et al. (2006) and Scheperjans et al. (2008); ROI_5) Left SI (x = -40, y = -30, z = +60), labeled as Area 1 according to the studies by Geyer et al. (1999, 2000); ROI_6) Left SPL (x = -20, y = -67, z = +63), labeled as Area 7A in the SPM Anatomy toolbox according to the study by Scheperjans et al. (2008).
As a control, the BOLD signal was also assessed in two further ROIs, one at the cortical level, in the left Middle Temporal Gyrus (MTG) (ROI_9), and one in the deep white matter (WM) of the left hemisphere (ROI_10) (see Suppl. Table 3 for the ROIs' MNI coordinates). We included the MTG as a control in order to confirm that there were no significant differences due to grip or goal decoding in an area not belonging to the MNS, although it emerged from the conjunction analysis.
We defined a sphere with a maximum radius of 5 mm within each anatomically defined region, using the MarsBaR software for SPM ( http://marsbar.sourceforge.net/ ). Then, for each ROI separately, we extracted the average BOLD signal change across all significant voxels using the SPM Rex Toolbox ( http://web.mit.edu/swg/rex ). All subjects showed significant activations in the ROIs considered for the analyses. Percent signal change within each ROI was compared between the four experimental conditions of the observation task using a 2 × 2 analysis of variance (ANOVA) with Grip and Goal as repeated-measures factors. Significant effects were followed up with post-hoc paired-sample t-tests with Bonferroni correction for multiple comparisons.
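The extraction of an ROI-averaged signal from a 5-mm sphere can be sketched as follows: a simplified stand-in for the MarsBaR/REX steps, assuming an isotropic 2-mm voxel grid and a center given in voxel indices (both assumptions of this illustration).

```python
import numpy as np

def sphere_mask(shape, center_vox, radius_mm=5.0, voxel_mm=2.0):
    """Boolean mask of voxels within radius_mm of center_vox (voxel
    indices) on an isotropic grid with voxel_mm spacing."""
    grids = np.indices(shape)  # (3, nx, ny, nz) coordinate arrays
    dist2_mm = sum((g - c) ** 2 for g, c in zip(grids, center_vox)) * voxel_mm ** 2
    return dist2_mm <= radius_mm ** 2

def mean_signal(volume, mask):
    """Average signal across the voxels inside the ROI mask."""
    return volume[mask].mean()
```

Applying `mean_signal` to each condition's contrast volume yields one value per ROI per condition, which is the quantity entered into the 2 × 2 ANOVA described above.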

Multivariate pattern analysis
Multivariate pattern analyses (MVPA) applied to neuroimaging data treat brain images as spatial patterns, jointly analyzing data from individual voxels within a region and thereby identifying properties that may not be detectable with a mass-univariate approach. In order to detect subtle information that could be spatially distributed over the brain, we conducted MVPA on un-smoothed normalized T2* functional brain images acquired during the observation task, computing two binary classification models: the first refers to grip type (precision vs hook), the second to action goal (open vs close). The classifier takes as input a feature vector, consisting of the value of each voxel (features), together with the categorical label corresponding to each experimental condition. After training, the model is tested by applying it to a held-out set of data, for which the classifier returns a predicted label for each brain pattern. For this purpose, we used the Pattern Recognition for Neuroimaging Toolbox (PRoNTo v.2.1; Schrouff et al., 2013), a MATLAB (The MathWorks Inc.) based toolbox.
In order to compute each model, the experimental design elements, that is, conditions (labels), onsets, duration and number of blocks, and interscan interval (2 s), were manually specified. Next, the un-smoothed normalized T2* functional brain images belonging to the experimental conditions of each subject were selected, and a first-level mask was applied, including only voxels containing relevant features and discarding those with non-relevant information, i.e., voxels outside the brain. Afterwards, a similarity matrix was computed using the linear kernel included in the PRoNTo toolbox: each voxel value was extracted from each image to build the feature vector, and the kernel function, by computing the dot product of each pair of feature vectors (the so-called kernel trick), returned a value characterizing the similarity of each pair, yielding a kernel matrix over the feature space, i.e., the real vector space containing the feature vectors. Since fMRI data form continuous time series, polynomial detrending was applied.
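The linear-kernel step can be sketched in a few lines. This is a Python illustration of the computation PRoNTo performs internally, on arbitrary simulated arrays (sizes and names are our assumptions): the kernel matrix simply stores the dot product between every pair of feature vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical feature vectors: 40 observation blocks x 5000 in-mask voxels
X = rng.normal(size=(40, 5000))

# Mean-center the features (as done on training data) before the kernel step
Xc = X - X.mean(axis=0)

# Linear kernel: K[i, j] is the dot product between samples i and j,
# summarizing pattern similarity without handling each voxel separately
K = Xc @ Xc.T

print(K.shape)                 # (40, 40): one similarity value per pair
print(np.allclose(K, K.T))     # True: the kernel matrix is symmetric
```

Classifiers that accept a precomputed kernel can then be trained on `K` alone, without ever revisiting the full voxel space.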
Both models were computed using second-level masks for the same areas included in the univariate ROI analysis. In order to access widely distributed information by jointly analyzing multiple voxels belonging to the same anatomical area revealed by the conjunction analysis, we used larger masks. ROI masks for the multivariate analysis were created using the AAL atlas (Tzourio-Mazoyer et al., 2002), the Brodmann Area Atlas, and the Talairach Daemon database atlas (Lancaster et al., 2000), all provided by the Wake Forest University PickAtlas (WFU PickAtlas; https://www.nitrc.org/projects/wfu_pickatlas/; Maldjian et al., 2003), plus the Human Motor Area Template (HMAT; http://lrnlab.org; Mayka et al., 2006) (Table 3).
Similarly to the univariate analysis, in the MVPA four ROIs were also selected in the right hemisphere: (a) Right PMd, (b) Right IPL, (c) Right IPS, (d) Right SPL (Suppl. Table 4). These ROI masks were also created using the same atlases described above for the left-hemisphere ROIs. As a control, a ROI was built in the right MTG, as in the univariate analysis. The ROI locations were exactly mirror-symmetric to those of the main ROIs selected in the left hemisphere.
A classification model was then computed using a binary Support Vector Machine (SVM). This classification algorithm computes a hyperplane that splits the feature space, treated as high-dimensional thanks to the kernel trick, maximizing the margin that separates points belonging to the two classes while allowing for a certain degree of misclassification. The previously computed similarity matrix was provided to the SVM classifier, which extracted a weight vector running perpendicular to the hyperplane. The training samples lying closest to the hyperplane, which determine the decision boundary, are called "support vectors". Precision was selected as class 1 of the binary classification model and hook as class 2.
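A minimal sketch of this classification step, using scikit-learn's `SVC` as a stand-in for PRoNTo's implementation and simulated data (not the study's code or data): training on the precomputed linear kernel, then recovering the weight vector normal to the hyperplane from the dual coefficients and the support vectors.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Hypothetical data: 40 blocks x 500 voxels; labels 1 = precision, 2 = hook
X = rng.normal(size=(40, 500))
y = np.repeat([1, 2], 20)
X[y == 1] += 0.3  # inject a multivoxel difference between the two grips

K = X @ X.T                               # precomputed linear kernel
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)

# Weight vector (normal to the separating hyperplane), reconstructed from
# the dual coefficients and the support vectors' original features
w = clf.dual_coef_ @ X[clf.support_]
print(w.shape)            # (1, 500): one weight per voxel
print(len(clf.support_))  # number of support vectors defining the boundary
```

Per-voxel weights recovered this way are what toolboxes like PRoNTo display as discrimination maps.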
To assess the generalization ability of the classifier on an independent, untrained dataset, a Leave-One-Subject-Out (LOSO) cross-validation scheme was employed. The dataset was partitioned into disjoint training and test sets, with the number of folds equal to the number of subjects. In each iteration, the training set consisted of all subjects minus one, and the learned function was applied to predict the labels of the left-out subject's data. Further operations applied to the data consisted of sample averaging within subjects, mean-centering the features using the training data, and dividing the data vectors by their Euclidean norm. Finally, to estimate the P-value, 1000 permutations were run, retraining the model on permuted labels at each iteration. The same analysis was applied to the second model (Open vs Close) using the data referring to the action goal, selecting open as class 1 and close as class 2.
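The LOSO scheme with permutation-based significance can be sketched as follows, again as a scikit-learn illustration on simulated data (the study ran 1000 permutations in PRoNTo; fewer are used here to keep the sketch fast):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import (LeaveOneGroupOut, cross_val_score,
                                     permutation_test_score)

rng = np.random.default_rng(3)
n_subj, blocks_per_subj = 10, 4
# Hypothetical per-block patterns: (10 subjects x 4 blocks) x 200 voxels
X = rng.normal(size=(n_subj * blocks_per_subj, 200))
y = np.tile([1, 1, 2, 2], n_subj)             # 1 = precision, 2 = hook
groups = np.repeat(np.arange(n_subj), blocks_per_subj)
X[y == 1] += 0.4                               # consistent class difference

logo = LeaveOneGroupOut()                      # one fold per left-out subject
clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, cv=logo, groups=groups).mean()

# Permutation test: retrain on label-shuffled data to estimate the P-value
score, perm_scores, pvalue = permutation_test_score(
    clf, X, y, cv=logo, groups=groups, n_permutations=100, random_state=0)
print(f"LOSO accuracy = {acc:.2f}, permutation P = {pvalue:.3f}")
```

Because each fold leaves out a whole subject, the accuracy estimates generalization to new individuals rather than to new blocks from already-seen subjects.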
These processing steps determine classification performance by computing model accuracy and its significance. Besides model accuracy, the Area Under the Curve (AUC) is another measure of model performance: higher AUC values correspond to better performance, while a value of 0.5 corresponds to chance. In binary classification, the trade-off between correct and incorrect classification of the samples of the two classes is the sensitivity/specificity trade-off, i.e., between the true positive rate (sensitivity) and the true negative rate (specificity). In our analysis, these two measures correspond to class 1 and class 2 classification accuracy, respectively, representing the percentage of correctly classified samples of each class.
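These performance measures can be made concrete on a toy set of decision values (hypothetical numbers chosen for illustration, not the study's results):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical true labels (1 = precision, 2 = hook) and SVM decision
# values for 20 test blocks; positive scores favor class 1
y_true = np.array([1] * 10 + [2] * 10)
scores = np.array([1.2, 0.8, 0.5, 1.5, -0.2, 0.9, 1.1, 0.3, 0.7, 1.4,
                   -0.9, -1.2, -0.4, 0.2, -1.5, -0.8, -1.1, -0.3, -0.6, -1.0])
y_pred = np.where(scores > 0, 1, 2)

# Per-class accuracy: sensitivity (class 1) and specificity (class 2)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[2, 1]).ravel()
sensitivity = tp / (tp + fn)   # class 1 (precision) accuracy
specificity = tn / (tn + fp)   # class 2 (hook) accuracy

# AUC: probability that a random class-1 sample scores above a random
# class-2 sample; 0.5 is chance, 1.0 a perfect separation
auc = roc_auc_score(y_true == 1, scores)
print(f"sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}, AUC = {auc:.2f}")
# prints: sensitivity = 0.90, specificity = 0.90, AUC = 0.99
```

Note that the AUC uses the continuous decision values, so it can remain high even when a fixed threshold misclassifies a few borderline samples.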

Univariate analysis
The comparison between the four action observation conditions and the control condition (Obs_Hook_Open vs Ctrl, Obs_Hook_Close vs Ctrl, Obs_Precision_Open vs Ctrl, Obs_Precision_Close vs Ctrl) revealed significant activations of several cortical and cerebellar areas. Fig. 2A-D shows group-level statistical maps of activations overlaid on a high-resolution MNI template. All activations were analyzed using a statistical threshold of P < 0.001, FWE corrected at cluster level. Common clusters of significant voxels included the occipito-temporal cortex (pMTG, Inferior Temporal Gyrus (ITG)), dorsal and ventral sectors of the premotor cortex (PMd, PMv), and superior and inferior parietal cortex (SPL/IPL/IPS). The clusters were largely symmetrical, although some of them, such as the IPS and the PMv, were more extended in the left hemisphere. The direct comparisons between the main experimental conditions did not show any significant difference. Fig. 2E shows the flat maps of cerebellar activations during the four observation conditions contrasted with the static Ctrl, computed using the SUIT toolbox for SPM12 (Diedrichsen et al., 2009). The activated clusters were mostly lateralized and included the right cerebellar lobules VI and VIII. Some clusters were present in both hemispheres, although lateralized to the right, such as lobule VIII for Precision_Close actions. Cluster peaks were mainly located in lobule VI of the right cerebellar hemisphere.
In order to assess specific activations of the MNS areas during the observation task, we also assessed BOLD activity at both cortical and cerebellar levels during the execution of the same reaching-grasping actions presented in the observation task. Fig. 3A-D shows the brain activations associated with the execution of the four types of reaching-grasping actions contrasted with the rest condition. All group activations are shown at a significance level of P < 0.001 (FWE corrected at cluster level). The cortical areas activated in all conditions included the primary motor cortex (M1), the primary somatosensory cortex (SI), and the MCC bilaterally. In addition, common activated areas in the parietal cortex of both hemispheres included the IPL, IPS, and SPL. In the frontal lobe, common bilateral activations included the premotor cortex (PMd, PMv), the IFG, and SMA. Further activations included the basal ganglia (putamen and globus pallidus) bilaterally. Similarly to the results of the observation task, the comparisons between the activations during action execution did not show any significant differences between conditions.
The cerebellum was strongly activated also during the four motor conditions (Fig. 3E), including right lobules V-VI, Crus I, and VIII. Activation peaks were localized at the level of the cerebellar vermis and, in the right lateral cerebellar cortex, in lobules VI and VIII. The left cerebellar lobule VI was significantly recruited during execution of Hook actions.
In order to verify the presence of voxels showing shared activation during observation (vs Ctrl) and execution conditions (Obs&Exe_Hook_Open, Obs&Exe_Hook_Close, Obs&Exe_Precision_Open, Obs&Exe_Precision_Close), we used a conjunction analysis (Friston et al., 2005). Shared voxels were found bilaterally in the main nodes of the cortical MNS (Fig. 4A-D). Statistical details and MNI coordinates of the significant clusters revealed by the conjunction analysis are reported in Suppl. Table 4. In particular, significant shared activations were present bilaterally in the parietal cortex (IPL, IPS and SPL), in the occipito-temporal cortex (pMTG, ITG), in PMd, and in Left PMv. Shared voxels between observation and execution of the four reaching-grasping actions were also present in the cerebellum (Fig. 4E). The anterior cluster in the cerebellar cortex was lateralized to the right hemisphere at the level of lobule VI. A second cluster was present in the right posterior cerebellar hemisphere, mainly located in lobule VIII.

ROI analysis results
The comparisons among experimental conditions were also investigated at the ROI level, using the areas localized on the basis of previous cytoarchitectonic studies (see "Univariate Analysis: ROI analysis"). The averaged PSC within the selected ROIs (Fig. 5 and Suppl. Fig. 1) was analyzed at the group level using a 2 × 2 ANOVA, with Grip and Goal as repeated-measures factors. Post-hoc comparisons were computed using paired-sample t-tests with Bonferroni correction for multiple comparisons (alpha set to P < 0.05 corr.).
Concerning the analysis performed on the right-hemisphere ROIs, a significant effect was present only in Right IPL [F(1, 17) = 7.42, P < 0.01, η²p = 0.34] (Suppl. Fig. 1). Post-hoc comparisons indicated that, in Right IPL as in the left one, the BOLD signal was higher during the observation of Precision actions than of Hook actions (P < 0.05). See Suppl. Table 6 for details about SVM classification performance for both the Grip and Goal models in each main ROI.

Multivariate analysis results
The models run on the right cortical ROIs revealed a significant accuracy only in Right IPL for the Grip model (model accuracy = 80%, P < 0.01) (Suppl. Table 7). Both the Grip and Goal models run on the remaining ROIs yielded non-significant accuracies (P > 0.05).

Discussion
Previous neuroimaging studies using the traditional univariate approach demonstrated that the observation of reaching-grasping actions performed with different goals and grips activates several cortical and subcortical areas belonging to the extended MNS (Gazzola and Keysers, 2009; Hardwick et al., 2018; Molenberghs et al., 2012a). Here, we demonstrate, first of all, that observation of reaching-grasping actions recruits, at the cortical level, both dorsal and ventral areas of the MNS, irrespective of the final goal of the action or the grip used to perform it. In addition, we show that the cerebellum (lobules VI and VIII) was also strongly activated. Direct contrasts between observation conditions did not reveal areas selective for the processing of grip or action goal; nevertheless, the ROI analysis, performed within areas localized using the conjunction between observation and execution, showed that: (a) multiple areas including Left PMd, PMv, SPL, IPS, bilateral IPL and Right cerebellar lobule VI were more strongly activated during observation of Precision than of Hook actions; (b) among the areas revealed by the conjunction analysis, only the Left IPL showed a modulation of activity for the interaction between grip and action goal. Interestingly, these results were also extended using MVPA, which revealed a significant decoding accuracy for grip type not only in the same areas described in the univariate ROI analysis, but also in additional ROIs, such as Left S1 and Right cerebellar lobule VIII. The MVPA results also confirm the specific role of Left IPL in decoding the final goal of the action, independently of the grip used for its execution.

Fig. 3. Cortical and cerebellar activations related to the contrasts between action execution conditions and Rest (Exe_Hook_Open vs Rest, Exe_Hook_Close vs Rest, Exe_Precision_Open vs Rest, Exe_Precision_Close vs Rest). (A-D) 3D MNI152 brain template (MRIcroGL software; https://www.nitrc.org/projects/mricrogl/), left view, right view, and two representative parasagittal slices; (E) flat maps of the cerebellum (SUIT). All activations are rendered with a threshold of P < 0.001 (FWE corrected at cluster level). Other abbreviations as in Fig. 1.

Cortical and cerebellar activation during observation of complex grasping actions
Considering, first of all, the classical univariate analysis, it confirmed that action observation (vs static control) activates an extended network involving dorsal and ventral cortical circuits, plus lateral sectors of the anterior and posterior cerebellum, mainly in the right cerebellar hemisphere. However, this type of analysis is not fine-grained enough to reveal differential activations between the main conditions. Action execution activated a typical motor network, without any clear difference between conditions, similarly to what was found during action observation.
The results of the present study can be compared with those of similar studies employing complex actions. For example, the study of Biagi et al. (2010), in which participants watched simple and complex reaching-grasping actions, revealed the activation of a similar network of cortical areas. The same type of activation was shown in a study by Gazzola et al. (2007a), who asked participants to observe complex hand actions (such as grasping an espresso cup or removing a tea bag and placing it on a saucer). Molnar-Szakacs et al. (2006), in an fMRI study in which subjects were asked to observe various types of complex unimanual action sequences, found a similar activation of the parieto-frontal action observation network in all conditions, and suggested its involvement in the internal simulation of observed sequences of varying hierarchical complexity. Thus, there is general agreement on the activation of the action observation network during observation of complex actions.
Only the conjunction analysis, however, makes it possible to reveal whether shared activation is present, for each experimental condition, between action observation and action execution, and thus to demonstrate the activation of the MNS. Indeed, this analysis revealed that both dorsal and ventral parieto-premotor circuits show this shared activation, and that in the cerebellum the shared sectors match those active during pure observation, i.e., the lateral ones. Note that, as one could expect, strictly motor areas of the cerebellum do not emerge from this type of analysis. A similar pattern resulting from conjunction analysis was described in the above-mentioned study of Gazzola et al. (2007a), including both cortical areas and the lateral cerebellum. Note that the conjunction analysis also revealed a bilateral shared cluster at the level of the posterior MTG. While the activation during observation is expected, during the execution condition the activation is most likely due to the visual feedback coming from the observation of the subject's own hand movement.

Type of grip is coded in a larger network of cortical and cerebellar areas with respect to final action goal
The same conjunction analysis enabled us to find differential activation, in given ROIs, between the two types of grip, showing that activation during observation of precision grip is higher than during observation of hook grip. This difference should not depend on the type of handle (ring or sphere), because this variable is also present in the control condition, which is subtracted before the conjunction analysis. Thus, the factors possibly explaining the differential activation are grip configuration and its kinematic features.
The kinematic analysis performed on the actions presented to the participants demonstrated a clear difference in wrist velocity and maximum grip aperture between the two grips used, namely precision and hook. This argues in favor of interpreting the fMRI results as an effect of grip elaboration during action observation. One could also argue that the interaction effect found in the IPL might be associated with an interaction of the kinematic profiles of the four conditions. However, neither wrist peak velocity nor maximal grip aperture appears to be modulated by any combination of specific grips and goals, but rather only by the type of grip. This is in accord with previous studies in the monkey (Bonini et al., 2010) that investigated the neuronal discharge during grasping actions having the same final goal but reaching different end points (i.e., a container located near the mouth or near the target). These studies did not show any difference in discharge intensity between the two end points, in spite of kinematic differences.

Disentangling grip type and action goal by means of multivariate pattern decoding in parietal and premotor areas
The results that emerged from the univariate analysis were further confirmed by the MVPA. This analysis generally allows a deeper investigation of the pattern of voxels encoding subtle information. In the present study, MVPA allowed us to show a high classification accuracy for different grips and different final goals. In particular, the classification results relative to the grip demonstrated the involvement of cortical and cerebellar areas in addition to those found with the univariate analysis. On the contrary, the MVPA results about the decoding of the final action goal confirmed the role of the IPL as a critical area for this type of processing, showing that a good discrimination between the two final goals under consideration is possible. This role is in line with the proposal of Koul et al. (2018), who showed high decoding accuracy in IPL for action intention during observation of a reaching-grasping act. However, in their study, participants performed an active intention discrimination task, in which only kinematic cues could be used to differentiate between different intentions. The role of IPL is also underlined by the repetition suppression (RS) fMRI study of Hamilton and Grafton (2008), who employed a paradigm in which participants observed unimanual or bimanual actions having different kinematics and outcomes. Note that their definition of outcome is "the physical consequence of an action". Their paradigm is comparable to ours, although in our study observed actions were only unimanual. Their results show a significant RS effect for repeated outcomes with respect to novel outcomes in IPL, while repeated or novel grips did not elicit any RS effect in this region. Although we did not use an RS approach, we agree with Hamilton and Grafton's study in demonstrating the relevance of IPL in coding the action final goal (outcome).
Another study that includes observation conditions similar to ours is that of Wurm and Lingnau (2015), in which closing or opening actions with different types of kinematics/grips were presented. They found a high decoding accuracy for different action goals and kinematics/grips in IPL, bilaterally. The accuracy in decoding the action goal was also high in PMv, bilaterally. Although in the present study we did not find evidence of differential activation of PMv for the action goal, we cannot exclude that, using a wider set of stimuli with a higher number of actions and grips, it would be possible to demonstrate action goal coding in PMv as well. This would also be in line with monkey studies (Bonini et al., 2010).
The picture emerging from the present study seems to confirm that, during observation, multiple cortical and subcortical areas are involved in the coding of grip type, very likely integrating information relative to hand shaping, hand-object interaction, and other kinematic parameters. In particular, the activation of PMv, IPS and IPL is in agreement with the well described presence of motor neurons in the F5p-AIP and F5c-PFG circuits of the monkey showing clear preference for specific types of grip (Bonini et al., 2012; Fluet et al., 2010; Murata et al., 1997; Raos et al., 2006; Rizzolatti et al., 1988; Rozzi et al., 2008; Sakata et al., 1995; Taira et al., 1990). A similar finding was also reported for mirror neurons recorded in monkey areas F5 and PFG (Gallese et al., 1996; Rozzi et al., 2008). Interestingly, a single-neuron study comparing the activity of F5 and PFG neurons while monkeys executed two types of actions (grasping-to-eat and grasping-to-place) with three different types of grip (Bonini et al., 2012) showed that grip coding was highly represented in both areas (more than 70% of the recorded neurons). In addition, many neurons (about 40% on average) also coded the action goal. These findings indicate that, at least at the high-order execution level, both grip and action goal are well represented in the parieto-premotor circuits, suggesting that this double type of coding could hold also during observation (see also Bonini et al., 2010, for the demonstration of action goal decoding by mirror neurons in F5 and PFG). In our univariate ROI analysis, action goal per se, as a statistical factor, was not significant. However, the MVPA clearly shows a high accuracy in the classification of different action goals, but only in Left IPL. A further interesting finding of the above-mentioned work of Bonini et al. (2012) is that a higher percentage of PFG than F5 neurons were modulated by both action goal and grip.
This latter observation closely parallels the results of our ROI analysis, where only in Left IPL did we find a statistical interaction between action goal and grip.
Interestingly, using MVPA, Buchwald et al. (2018) showed high decoding accuracy for areas belonging to the IPS during planning of pantomimed grasps of different tools. They propose that the involvement of the anterior IPS in grip formation would be associated with the pragmatic knowledge about tool properties, which is mainly encoded in the supramarginal gyrus, i.e., in IPL. Although their analysis concerned only action execution, a similar result could be expected by focusing on observation of similar actions and subsequently performing an MVPA based on the results of a conjunction analysis. Their interpretation of the role of IPL is compatible with our idea that this cortical sector can be crucial for decoding the grip-goal interaction.
The coding of grip type is also evident, in our study, in PMd and SPL, belonging to the dorsalmost part of the MNS. This finding is in good agreement with monkey data showing that, in area F2 of PMd and in parietal area V6A, there are neurons specific for different grips among both motor (Fattori et al., 2010; Raos et al., 2004) and mirror (Papadourakis and Raos, 2019) neurons. Thus, coding of grip type is present in both dorsal and ventral parieto-premotor circuits of the MNS. Of course, it is very likely that this coding has a different meaning in the two circuits. On the basis of the human and monkey literature, it appears that in the ventral circuit the coding of grip is more related to the goal of the motor act (e.g. taking possession of objects of different shapes and sizes) (Ehrsson et al., 2000; Gentile et al., 2011; Grèzes et al., 2003; Raos et al., 2006; Rizzolatti et al., 1988), while in the dorsal circuit kinematic aspects of hand-object interaction could prevail (Casile et al., 2010; Errante and Fogassi, 2019). This does not exclude, of course, an interaction between the two circuits, as also suggested by their anatomical connections (Caminiti et al., 2017; Gamberini et al., 2009; Gerbella et al., 2011; Rozzi et al., 2006). The design of the present study, however, does not allow us to disentangle the possible differential roles of the two circuits.

Decoding grip type and action goal within cerebellum
Cerebellar activation constitutes a further interesting finding of the study. In this regard, the conjunction analysis clearly shows a shared activation in the lateral part of lobules VI and VIII, very likely corresponding to the primary and secondary hand representations, according to the classical motor somatotopy (Grodd et al., 2001; Manni and Petrosini, 2004; Stoodley and Schmahmann, 2009). However, comparing the localization of activation during action execution with that during action observation, one can observe a kind of medial-to-lateral shift from the former to the latter. While the medial activation can be strictly related to pure motor execution, the more lateral activation could be related to a more abstract motor representation. This would be similar, at the cortical level, to the activation of M1 or the premotor cortex, respectively. According to this interpretation, the cerebellum would reproduce the cortical pattern of activation.
Concerning the possibility of distinguishing activations related to action goal or grip type, there is clearly only an effect of grip type, in lobule VI, as shown by the univariate analysis, and in both lobules VI and VIII, as demonstrated by the classification accuracy obtained with MVPA. It has already been established that the cerebellum is activated during action observation (Abdelgabar et al., 2019; Caligiore et al., 2013), suggesting its contribution to the "motor" resonance with the observed action. Moreover, it has been proposed that the cerebellum, during observation of actions performed by others, plays the role of an adaptive predictor (Gazzola and Keysers, 2009; Sokolov et al., 2017). This proposal is in line with the classical view of cerebellar functioning, since the cerebellum has been described as a controller structure that plays a fundamental role during hand-object interaction, instructing the cerebral cortex in a predictive manner (Sokolov et al., 2017). On the other hand, it has also been proposed that the cerebellar contribution during action perception relates to the processing of time and sequences, thus operating in a forward modality (D'Angelo et al., 2011; D'Angelo and Casali, 2012). The present work suggests that the cerebellar contribution during observation is more related to the way in which grasping is executed, probably participating in the kinematic coding of grip, while it does not suggest that this structure participates in discriminating between action goals. Thus, it is possible that the observation of the four actions used in this study activates the cortico-cerebellar pathway, producing a kind of simulation of the observed action, mirroring the type of grip control that would occur during actual execution.

Conclusions
The present study shows that, during action observation, a large number of parietal, premotor and cerebellar structures can decode the type of grip used by the observed agent, while only the inferior parietal cortex is able to distinguish between different action goals, suggesting a leading role of this region in this type of decoding. Future studies could employ a wider set of stimuli taking into account elements that could influence MNS activations, such as, for example, the observer's perspective, action familiarity, contextual cues, motivational states and social factors (Amoruso and Finisguerra, 2019; Aziz-Zadeh et al., 2018). It would also be interesting to verify the possibility of disentangling the processing of kinematic aspects from the type of grip used, for example by adopting experimental paradigms that specifically modify some kinematic aspect while keeping the object and the grip invariant (e.g. varying the velocity or the trajectory). The use of many variables would allow the employment of wide-ranging data analysis techniques such as, for example, representational similarity analysis, which may yield a broader understanding of the role of other cortical areas and their contribution to the decoding of different action goals and grips during action observation.

Declaration of Competing Interest
None.