Identification of cerebral cortices processing acceleration, velocity, and position during directional reaching movement with deep neural network and explainable AI

Cerebral cortical representation of motor kinematics is crucial for understanding human motor behavior, and potentially extends to efficient control of brain-computer interfaces. Numerous single-neuron studies have found a relationship between neuronal activity and motor kinematics such as acceleration, velocity, and position. Despite differences between these kinematic characteristics, it is hard to distinguish their neural representations with macroscopic functional measurements such as electroencephalography (EEG) and magnetoencephalography (MEG). The reason might be that cortical signals are not sensitive enough to segregate kinematic characteristics due to their limited spatial and temporal resolution. Considering the different roles of each cortical area in producing movement, there might be specific cortical representations for acceleration, velocity, and position. Recently, neural network modeling has been actively pursued in the field of decoding. We hypothesized that the neural features of each kinematic parameter could be identified by combining a high-performing decoding model with an explainable AI method. Time-series deep neural network (DNN) models were used to capture the relationship between cortical activity and motor kinematics during reaching movement. With these DNN models, kinematic parameters of reaching movement in 3D space were decoded from cortical source activity obtained from MEG data. An explainable artificial intelligence (AI) method was then adopted to extract maps of the cortical areas that strongly contributed to decoding each kinematic parameter from the DNN models. We found that there were distinct as well as shared cortical areas for decoding each kinematic attribute. Shared areas included the bilateral supramarginal gyri and superior parietal lobules, known to be related to the goal of movement and sensory integration.
On the other hand, dominant areas for each kinematic parameter (the contralateral motor cortex for acceleration, the contralateral parieto-frontal network for velocity, and bilateral visuomotor areas for position) were mutually exclusive. Regarding the visuomotor reaching movement, the motor cortex was found to control the muscle force, the parieto-frontal network encoded reaching movement from sensory information, and visuomotor areas computed limb and gaze coordination in the action space. To the best of our knowledge, this is the first study to discriminate kinematic cortical areas using DNN models and explainable AI.


Introduction
Human beings make ceaseless and countless movements in their lives. Motor behavior is manifested by various motor characteristics including muscle and joint movements, force and momentum, and motor kinematic parameters such as speed, acceleration, position, and direction of limbs (Ashe and Georgopoulos, 1994; Bourguignon et al., 2019; Humphrey et al., 1970; Jerbi et al., 2007; Kakei et al., 1999). The relationship between neural activity and various motor characteristics is important for understanding the movement behavior of a person.
It also has a practical implication to improve the decoding performance of motor brain-computer interfaces (BCI).
Previous studies using non-human primates have found that neuronal activity is closely related to various motor kinematic parameters. An early study on the relationship between neuronal activity and motor kinematic parameters showed that neurons in the primary motor cortex (M1) can selectively respond to a specific direction of reaching movement (Georgopoulos et al., 1986). A further study reported that neuronal activities in M1 and Brodmann area 5 (BA5) are significantly correlated with time-varying motor kinematic parameters such as velocity, position, acceleration, and direction (Ashe and Georgopoulos, 1994). Two notable results were drawn from that study. First, neurons in M1 and BA5 preferred a specific kinematic parameter. Among neurons recorded in M1 and BA5, the proportion of neurons that preferred velocity was the highest, followed by position-preferred and acceleration-preferred neurons. This neuronal preference toward a specific motor kinematic characteristic has been validated in subsequent studies (Churchland et al., 2006a; Moran and Schwartz, 1999; Wang et al., 2007).
Second, this preference varied across cortical areas. Acceleration-preferred neurons were found in BA5, but not in M1. These results suggest that the animal nervous system processes motor kinematics even at the level of small neuronal populations, and that this processing may vary across brain areas. However, since single-neuron recordings cover only a very focal area (Sejnowski et al., 2014), it was hard to identify neural representations that appeared throughout the brain. Thus, finding a cortical representation of each kinematic parameter with macroscopic neural data has been an important issue.
Non-invasive studies in humans have suggested that large-scale cortical activity is also closely related to kinematic parameters. Jerbi et al. (2007) found that reaching speed is significantly coherent with cortical source activity from M1, the premotor cortex (PM), the posterior parietal cortex (PPC), and the dorsolateral prefrontal cortex (DLPFC). A similar coherence map was found not only for speed but also for acceleration (Bourguignon et al., 2012). Taken together, motor kinematic parameters appear to be handled in a large-scale network centered around the motor-related cortex. However, since these two coherence maps were similar, cortical areas related only to acceleration or only to velocity could not be identified from these studies. We postulate that coherence-based mapping is limited in distinguishing cortical areas for each kinematic parameter because the spatial resolution of non-invasive methods is too low to segregate them. Instead of mapping a simple relationship between cortical activity and kinematic parameters, an alternative approach is needed to map the cortical areas contributing to the production of each kinematic parameter.
The purpose of this study was to identify the cortical areas for each kinematic parameter. To achieve this, a novel approach was employed to complement the limited coverage of single-neuron studies and the limited resolution of large-scale studies. Using deep neural network (DNN) models of the relationship between large-scale cortical activity and kinematic parameters, we analyzed the contribution of cortical areas to decoding each kinematic parameter. We adopted a state-of-the-art DNN model that could accurately decode each kinematic parameter from the source activity of cortical areas estimated from magnetoencephalography (MEG). The potential of neural network models has recently been actively discussed in the field of BCI. The decoding performance of electroencephalography (EEG)-based BCI systems has been improved through convolutional neural network (CNN) models (Lawhern et al., 2018; Schirrmeister et al., 2017). We have also previously decoded reaching trajectories from MEG signals with high accuracy using a time-series DNN model (Yeom et al., 2020).
One of the most noticeable shortcomings of a DNN model is that it works as a 'black box': its inner structure is composed of complex connections between multiple layers comprising hundreds of thousands of parameters. Although decoding performance is excellent, it is hard to identify the neural features that the DNN model uses. To open the 'black box', 'explainable AI', another state-of-the-art method, was applied in this study to evaluate each cortical area's contribution inside the DNN model when decoding each kinematic parameter. This method can evaluate the contribution of the input neural data when DNN models decode an output (Hoffman et al., 2018).
These approaches allowed us to construct a high-performing model to decode each kinematic parameter and to identify its large-scale cortical representation based on the cortical areas' contributions inside that model.

Experimental procedures
The same datasets as in Yeom et al. (2013) were used in this study. Briefly, MEG signals were recorded during a center-out movement task in 3D space. Nine right-handed subjects participated in the study. Their mean age was 26.7 ± 6.8 (mean ± standard deviation) years (range, 19 to 37 years). Handedness was evaluated with the Edinburgh Handedness Inventory; all participants were confirmed as right-handed with scores above 80 points (87.2 ± 5.7). Subjects were instructed to move their right hand toward a target presented in 3D space following a visual movement cue. Visual stimuli, center and target spheres in the 3D space, were presented on screen with an STIM2 system (Neuroscan, El Paso, USA). At the beginning of each trial, participants held the index finger of their right hand on the sphere presented at the center of the screen for 4 s. Thereafter, a target sphere connected to the center by a line was presented randomly in one of four corners for 1 s. Within the given time, participants were asked to move to the target and then return to the center (a center-out paradigm) as quickly and accurately as possible. This center-out task was repeated for 120 trials per session, and all participants performed two sessions. The detailed procedure is described in Yeom et al. (2013). Experiments were approved by the Institutional Review Board (IRB) of Seoul National University Hospital (approval number: 1105-095-363) and performed in accordance with the Declaration of Helsinki.

MEG and motor signal acquisition
MEG signals were recorded during the experiment. A whole-head MEG system (VectorView, Elekta Neuromag Oy, Helsinki, Finland) was used to measure brain signals during the experimental task. Along with brain signals, reaching movements were measured with a three-axis accelerometer (KXM52, Kionix, NY, USA). Both brain and movement signals were recorded at 600.615Hz. To mitigate artifacts, signals were acquired in a magnetically shielded room.

Signal preprocessing
Before analysis, both MEG and accelerometer signals were preprocessed to extract motor-related components.
First, for the MEG, only signals from the 204 gradiometers were used because of their high signal-to-noise ratio (SNR).
The accelerometer signal was band-pass filtered at 0.8-5 Hz. To estimate reaching velocity and position, the accelerometer signal was integrated. Integration introduced drift errors, which we corrected by removing linear trends and by setting the initial kinematic values to zero. For efficient processing, both MEG and movement signals were resampled to 50 Hz. These signals were then divided into epochs from -1 s to +2 s relative to visual cue onset. All participants completed the reaching movement within this period in all trials.
The baseline period was set to -1s to 0s. Each trial was corrected by subtracting the baseline. Thus, 240 epochs were collected per subject (60 per direction) and used for the analysis.
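The integration and drift-correction step described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the function name, the use of `scipy.signal.detrend` for linear-trend removal, and the zeroing of initial values are assumptions consistent with the text.

```python
import numpy as np
from scipy.signal import detrend

def integrate_kinematics(acc, fs=50.0):
    """Integrate a 1-D acceleration trace into velocity and position.

    Mirrors the preprocessing described in the text: cumulative
    integration, linear-trend removal to suppress integration drift,
    and zeroing of the initial kinematic values. Illustrative sketch.
    """
    dt = 1.0 / fs
    vel = detrend(np.cumsum(acc) * dt, type='linear')  # drift correction
    vel -= vel[0]                                      # zero initial velocity
    pos = detrend(np.cumsum(vel) * dt, type='linear')
    pos -= pos[0]                                      # zero initial position
    return vel, pos
```

In practice this would be applied per axis of the three-axis accelerometer, after the 0.8-5 Hz band-pass filtering and resampling to 50 Hz.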

Cortical source estimation
From the preprocessed MEG signals, cortical source signals of each epoch were estimated by dynamic statistical parametric mapping (dSPM) (Dale et al., 2000). dSPM values of epochs were computed with a minimum-norm inverse operator (Hämäläinen and Ilmoniemi, 1994). In this computation, to increase the SNR of the source estimates, the noise covariance of the MEG signal was used to pre-whiten the MEG epochs. Computed dSPM values were projected onto FreeSurfer source spaces (Dale et al., 1999; Fischl et al., 1999a; Fischl et al., 1999b) with 4096 vertices per hemisphere. For efficient processing, we reduced the source space from 8192 vertices to 445 cortical areas: the source space was divided into Desikan-Killiany-based cortical parcellations (denoted as the 'aparc_sub' parcellation in the MNE library), consisting of 445 sub-areas within 70 bi-hemispheric areas (Fischl et al., 2004).
Time-course activities in each area were then computed by averaging. The whole sequence to compute source estimations was done using a Python library, MNE (Gramfort et al., 2013).

Deep Neural Network model
A model with a high decoding ability was needed to accurately infer the relationship between cortical source activity and each kinematic parameter. To this end, we used a state-of-the-art deep neural network (DNN) model. Since a DNN can capture more complex, biologically plausible mappings than conventional linear regression, motor kinematic parameters can be decoded more accurately. Among various neural network architectures, a recurrent neural network (RNN)-based architecture was selected because RNNs are appropriate for decoding time-series or sequential data (Petneházi, 2018). The specific RNN architecture was long short-term memory (LSTM; Figure 1A) (Hochreiter and Schmidhuber, 1997). An LSTM processes time-series data by maintaining a state accumulated from the past (memory) and updating it with current information. This mechanism effectively addresses the vanishing-gradient problem that arises when the model is fitted through error back-propagation. An overview of the LSTM operation can be summarized by the following equations, in which 'gates' update the time-series (or sequence) information and apply it to the current feature. The LSTM first concatenates the current input $x_t$ and the past hidden state $h_{t-1}$ and passes them to the gates. The forget gate in equation (1) scales the memory (the cell state) conveyed from the past, $C_{t-1}$, by a forgetting ratio; $\sigma$ denotes the sigmoid function:

$$ f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \qquad (1) $$

In equation (2), the input gate decides how much new information should be saved into memory:

$$ i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i), \qquad \tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \qquad (2) $$

Adding the partially forgotten memory and the new information from the input gate yields the new cell state $C_t$, as in equation (3):

$$ C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \qquad (3) $$

Finally, the current output is obtained by multiplying the sigmoidal output gate with the (tanh-squashed) memory vector, as noted in equation (4):

$$ o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o), \qquad h_t = o_t \odot \tanh(C_t) \qquad (4) $$
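The gate computations in equations (1)-(4) can be sketched as a single NumPy time step. This is a didactic sketch of the standard LSTM cell, not the Keras implementation used in the study; the stacked weight layout and the gate ordering (forget, input, candidate, output) are illustrative conventions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step implementing equations (1)-(4).

    W has shape (4*n_hidden, n_hidden + n_input) and b has shape
    (4*n_hidden,); all four gates are computed from the concatenated
    [h_prev, x_t] in a single matrix product.
    """
    n = h_prev.size
    z = W @ np.concatenate([h_prev, x_t]) + b
    f = sigmoid(z[:n])                 # (1) forget gate
    i = sigmoid(z[n:2 * n])            # (2) input gate
    g = np.tanh(z[2 * n:3 * n])        # (2) candidate memory
    c = f * c_prev + i * g             # (3) new cell state
    o = sigmoid(z[3 * n:])             # (4) output gate
    h = o * np.tanh(c)                 # (4) hidden output
    return h, c
```

Iterating this step over a trial's time samples (and, for a bidirectional LSTM, once forward and once backward) yields the sequence representation that the dense layers then map to kinematic coordinates.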
To increase decoding performance, a bidirectional LSTM (bLSTM) was applied so that the time-series computation by the LSTM was done in both directions (Schuster and Paliwal, 1997). The model had eight layers: an input layer with a sequencing function, two bLSTM layers with two batch-normalization layers to cope with the vanishing-gradient and overfitting problems, and two dense layers. The last dense layer is a regression layer that outputs the 3-axis coordinates of the kinematic parameter in 3D space. A summary of the model is given in Table 1 and plotted in Figure 1B.

DNN model fitting
Before fitting the model, datasets for training and testing were constructed. All trials per subject (240) were split five-fold: four folds were used for training and the remaining one for testing, with five-fold cross-validation covering the whole dataset. The input data were then normalized with a min-max scaler so that they ranged between 0.0 and 1.0; this helps gradient descent converge toward the global minimum. In addition to data normalization, the weights of the DNN models were initialized with Xavier initialization (Glorot and Bengio, 2010); we sampled the initial weights from the Xavier-normal distribution since it empirically yielded the best performance. To fit the model, losses between the kinematic parameters decoded from brain sources and the real ones were computed and back-propagated through the layers of the model. The loss function was the mean absolute error (MAE). The adaptive moment estimation (ADAM) optimizer (Kingma and Ba, 2014) was used as the learning rule, with a learning rate of 0.001. For efficient optimization, mini-batch training (Li et al., 2014) was used with a batch size of eight trials, so the training loss was updated every eight trials. The model was trained by iterating over the whole training dataset 100 times. During training, we saved only the parameters with the highest decoding accuracy to preserve the best performance. An ensemble method was adopted to decode kinematic parameters more accurately and to decrease decoding variance (Zhou et al., 2002): five models with the same structure but different random states were trained, and their decoded kinematic parameters were averaged. The whole process was done with the Python APIs Keras (Chollet, 2015) and TensorFlow (Abadi et al., 2016).
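The data preparation described above (min-max scaling and the five-fold trial split) can be sketched as follows. The helper names, the shuffling, and the fixed seed are illustrative assumptions; the study may have split trials differently.

```python
import numpy as np

def minmax_scale(X):
    """Scale each feature of X to [0, 1] (min-max scaling, as in the
    text). X has shape (n_samples, n_features)."""
    lo = X.min(axis=0, keepdims=True)
    hi = X.max(axis=0, keepdims=True)
    return (X - lo) / (hi - lo + 1e-12)  # epsilon guards constant features

def five_fold_indices(n_trials, seed=0):
    """Split trial indices into five folds for cross-validation.
    Shuffling with a fixed seed is an illustrative choice."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_trials), 5)
```

With 240 trials per subject, each call yields five folds of 48 trials; each fold serves once as the test set while the other four are used for training.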

Contributions of cortical areas for each kinematic parameters by explainable AI
Although a DNN model has powerful decoding ability, it falls short in explaining itself: it is difficult to interpret because it has hundreds of thousands of trainable parameters (its 'neurons') organized in complex, many-layered dimensions. Thus, 'explainable AI', a family of methods that can evaluate the contribution of each feature of the input to the decoded output of a model, is currently drawing attention (Lawhern et al., 2018). In the present study, the cortical neural activity of each area was used as input to the DNN models, which decoded a kinematic parameter; we could therefore map the contribution of each cortical area to decoding each kinematic parameter. Among various explainable AI methods, an axiomatic attribution method named 'integrated gradients' (Sundararajan et al., 2017) was used. This method follows two axioms. The first axiom is 'sensitivity': if the input $x$ and its baseline $\bar{x}$ differ in a single feature and yield different decoded outputs, that feature should have a non-zero attribution. Plain gradient-based methods fail this axiom in some cases (Shrikumar et al., 2016). The second axiom is 'implementation invariance': when two or more models produce the same decoded output for every input, regardless of their internal structure, the attributions should be identical across these models. Although gradient-based methods satisfy this axiom via the chain rule, other methods such as LRP and DeepLIFT do not (Sundararajan et al., 2017). Integrated gradients were designed to satisfy both axioms; the method is presented in equation (5).
A basic concept of integrated gradients is to accumulate the gradients of the model output along the straight-line path from the baseline to the input. Here, $F$ denotes the fitted model, and the interpolation between the baseline $\bar{x}$ and the input $x$ is controlled by $\alpha$:

$$ \mathrm{IG}_i(x) = (x_i - \bar{x}_i) \times \int_{0}^{1} \frac{\partial F(\bar{x} + \alpha (x - \bar{x}))}{\partial x_i} \, d\alpha \qquad (5) $$

For implementation, equation (5) is approximated by a Riemann sum, giving equation (6); if the number of partitions $m$ is large enough, the sum approximates the integral:

$$ \mathrm{IG}_i(x) \approx (x_i - \bar{x}_i) \times \sum_{k=1}^{m} \frac{\partial F\left(\bar{x} + \frac{k}{m}(x - \bar{x})\right)}{\partial x_i} \times \frac{1}{m} \qquad (6) $$

The partition value $m$ was set to 100, which should be sufficient to approximate equation (5). In this study, the baseline was set to the zero matrix (a non-movement state). As a result, 25 contribution matrices (5 models × 5 data folds) were calculated for each subject. The matrices per subject were averaged and then used in the analysis.
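The Riemann approximation of integrated gradients can be sketched in a few lines. This is a minimal NumPy sketch, not the authors' TensorFlow implementation: it assumes the caller supplies a gradient function `grad_F` for the model, and uses the zero baseline and m = 100 from the text.

```python
import numpy as np

def integrated_gradients(grad_F, x, baseline=None, m=100):
    """Riemann-sum approximation of integrated gradients, equation (6).

    grad_F(v) returns the gradient of the model output with respect
    to its input v. The zero baseline corresponds to the
    'non-movement state' described in the text.
    """
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = np.arange(1, m + 1) / m                            # k/m, k = 1..m
    grads = np.stack([grad_F(baseline + a * (x - baseline))     # gradients along
                      for a in alphas])                         # the path
    return (x - baseline) * grads.mean(axis=0)                  # eq. (6)
```

A useful sanity check is the completeness property: for a differentiable model, the attributions sum (approximately) to F(x) - F(baseline).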

Decoding performance
To evaluate the performance of the DNN models, Pearson's correlation was calculated between the real kinematic parameters and the decoded ones (Figure 2A). Every motor kinematic parameter was decoded with a significantly high correlation (r > .810, p < .001). Velocity (r = .858, SEM = .026) had the highest mean correlation, followed by position (r = .854, SEM = .024) and acceleration (r = .811, SEM = .025); their differences were not significant (F(2, 24) = 1.385, p = 0.270). Decoded acceleration, velocity, and position are displayed in 2D (Figure 2B and Supplementary Figure 1) and 3D (Figure 2C) space. Both views show that the decoded trajectories for all kinematic parameters followed the real ones. Taken together, each DNN model decoded the corresponding kinematic parameter reasonably well. We therefore presumed that the neural features of the corresponding kinematic parameter were appropriately represented in the DNN models. These features were used for the contribution maps in the next section.
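The correlation-based evaluation above can be sketched as follows. Averaging the per-axis Pearson r values is an illustrative assumption about how the three spatial axes were summarized; the function name is hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def decoding_accuracy(real, decoded):
    """Mean Pearson correlation between real and decoded trajectories
    across the spatial axes, as used to evaluate the DNN models.

    real, decoded: arrays of shape (n_samples, n_axes), e.g. the x/y/z
    coordinates of one kinematic parameter over a trial.
    """
    rs = [pearsonr(real[:, k], decoded[:, k])[0]
          for k in range(real.shape[1])]
    return float(np.mean(rs))
```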

Cortical area's contributions to decode each kinematic parameter
Using explainable AI, the contribution of each area to decoding each kinematic parameter was extracted from the DNN models. The contribution was expressed as [240 trials x 445 areas x time] for every subject and was summarized with the following procedure. First, the trial dimension was averaged. Time-course contributions during the reaching movement (from visual-cue onset at 0 ms to movement offset) were then averaged to yield a representative contribution for each area. Representative contributions of the 445 cortical areas were thus obtained per subject.
We tested the significance of the contribution of the 445 cortical areas to decoding each kinematic parameter. A one-sample t-test was conducted for each kinematic parameter, with the baseline set to the mean contribution value. Areas with contributions significantly higher than the mean (p < .05) were retained. As a result, a large-scale cortical map was made for each kinematic parameter. Cortical areas within each map were categorized into two groups: shared areas and dominant areas. 'Shared' denotes the intersection of areas that contributed significantly to all kinematic parameters. 'Dominant' denotes areas that contributed significantly to one specific kinematic parameter; the dominant areas of a parameter were its set difference from the areas of the other two. These areas were marked on the FreeSurfer inflated surface for visualization (Figures 3 to 5). We found that 35 areas significantly contributed to decoding all kinematic parameters (p < .05) (Figure 3); these were named shared areas. The shared areas were mostly in the PPC, with adjacent areas in the temporal lobe and occipital cortex. The bilateral supramarginal gyrus (SMG, BA40), Brodmann area 5 (BA5), parts of the left angular gyrus (AG, BA39), and parts of the left occipital association areas (BA19), including V3A and the right V2 area (BA18), were also significant.
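The area-selection step described above can be sketched with SciPy. Using the grand-mean contribution as the test baseline and combining a two-sided p-value with a positive t-statistic are illustrative simplifications of the procedure in the text.

```python
import numpy as np
from scipy import stats

def significant_areas(contrib, alpha=0.05):
    """Flag cortical areas whose contribution across subjects is
    significantly higher than the grand-mean baseline, via a
    one-sample t-test as described in the text.

    contrib: array of shape (n_subjects, n_areas).
    Returns a boolean mask over the areas.
    """
    baseline = contrib.mean()                        # mean contribution baseline
    t, p = stats.ttest_1samp(contrib, baseline, axis=0)
    return (t > 0) & (p < alpha)                     # higher than the mean
```

The shared areas would then be the logical AND of the masks for acceleration, velocity, and position, and each parameter's dominant areas its mask minus the union of the other two.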

Acceleration dominant areas
First, we identified 40 areas, mostly located in the contralateral motor cortex and ipsilateral visual areas (Figure 4), that significantly contributed to acceleration decoding. These included the contralateral arm and hand regions of M1 (BA4) and S1 (BA2, BA3), the supplementary motor area (SMA, within BA6), and an inferior part of DLPFC (BA46). Areas within the visual cortices were also found to contribute to acceleration decoding, mainly in the ipsilateral visual association cortices (BA19) such as the middle temporal (MT) area, known as the visual motion area.

Velocity dominant areas
Twenty-five areas contributed only to velocity decoding. These areas lay in the contralateral parieto-frontal reaching network, including the dorsal premotor cortex (PMd, within BA6), parts of BA5, the medial part of the IPS (mIPS), an anterior part of DLPFC (BA10), and parts of the bilateral AG. As with the acceleration areas, some parts of the right BA18 within the visual areas were also significant (Figure 5). The ventral and posterior parts of the bilateral superior temporal sulcus (STS) also significantly contributed to velocity decoding.

Position dominant areas
Forty-seven areas were identified in the bilateral visual area for spatial processing and the ipsilateral motor cortex for saccadic eye-movement control. These position areas are presented in Figure 6. These areas consisted of

Mapping cortical areas for movement kinematic parameters with DNN and explainable AI techniques
Through a DNN model and explainable AI, we uncovered the specific cortical representation of each kinematic parameter: acceleration, velocity, and position. We built a bLSTM-based DNN model to identify the cortical activity related to each kinematic parameter of reaching movement. The DNN model successfully decoded the acceleration, velocity, and position of the movement. We then used explainable AI to extract the cortical representations embedded in the DNN model. In this way, we successfully distinguished cortical maps for acceleration, velocity, and position.
This approach allowed us to overcome limitations of previous studies on single neurons and on large-scale non-invasive cortical activity. Studies of single-neuron activity have related motor neuron activity to kinematic parameters using computational models (Ashe and Georgopoulos, 1994; Moran and Schwartz, 1999; Wang et al., 2007). Although this method has very fine spatial resolution, its coverage is too focal to map whole-brain cortical activity corresponding to each motor kinematic parameter. To overcome this limitation, whole-brain activity was measured during reaching movement with a non-invasive method. The relationships between large-scale cortical sources and kinematic parameters were learned by DNN models, whose attributions for each kinematic parameter were then analyzed by explainable AI to obtain large-scale cortical maps. Although the spatial resolution of this approach is lower than that of single-neuron recording, it is sufficient to segregate the large-scale cortical representation of each kinematic parameter.
Another method, the coherence between non-invasive MEG source signals and motor kinematic signals (Bourguignon et al., 2012;Jerbi et al., 2007), has the advantage of measuring the whole brain activity for motor kinematic parameters. However, it has a limitation in that coherence maps of kinematic parameters are superimposed and hard to be distinguished. This might be because the spatial resolution is too low to segregate cortical representations according to each kinematic parameter.
Although a neural network model has high decoding ability, it is difficult to know what is happening inside it, because the model consists of multiple hidden layers with hundreds of thousands of parameters (here, about 3,000,000 parameters across 8 layers). Approaches now exist to explain the inside of a neural network model; one of them is the explainable AI method, which can map a model's attributions by calculating the contribution of each part of the input data to the decoded output (Hoffman et al., 2018).
Using such a method, the features used by neural network models can be extracted and mapped onto the input data. On this basis, we employed an explainable AI method for neuroimaging data. Among the various explainable AI methods, integrated gradients (Sundararajan et al., 2017) was used in this study. With this method, the cortical areas that contributed to decoding acceleration, velocity, and position were extracted from the DNN models. To identify cortical representations according to kinematic parameters, we mapped the contributing areas and segregated them into dominant areas; the cortical maps also contained shared areas that appeared for all kinematic parameters. This approach therefore overcomes the limitation of previous large-scale coherence studies, which could not distinguish the kinematic parameters.
A machine learning (ML) model that decodes behavior from non-invasive neural signals, together with a method to analyze the neural features used by the model, can provide more reliable neural features than conventional methodology such as PCA (Hatamimajoumerd et al., 2020). For example, hand direction (Wang et al., 2010), different hand movements (Belkacem et al., 2018), and invariant visual object information (Isik et al., 2014) have been classified from MEG signals through LDA and support vector machines (SVM). The neural features used by these ML models reflected robust findings such as the activity of motor cortices during movement and the role of posterior occipital regions in visual information processing. However, methods based on conventional ML also seem insufficient to sensitively identify the cortical representation of the various components constituting a behavior. Going beyond these methods, this study increases reliability and sensitivity by using a state-of-the-art and biologically plausible method, the DNN model, which achieved higher decoding accuracy than a previous study that used a conventional time-series ML model (Yeom et al., 2013). We then extracted the neural features used by the high-performing models through another state-of-the-art method, explainable AI. As a result, we could identify the cortical areas for acceleration, velocity, and position in directional reaching movement.
Results showed that cortical areas for acceleration, velocity, and position within the visuomotor network could be identified. Among these areas, both shared and dominant characteristics were observed across areas of acceleration, velocity, and position. Thus, we segmented the cortical areas into shared and dominant areas and discussed them below.

Shared areas representing sensory processing and action goal selection for reaching movement
Cortical areas of acceleration, velocity, and position could be obtained from DNN models and explainable AI.
Some areas overlapped, and we defined these overlapping cortices as shared areas. We speculated that they reflect functions in the neural system common to all kinematic parameters; in particular, sensory integration and goal selection would be important for producing all kinematic attributes. First, consider sensory integration: for successful reaching movement, the brain has to integrate various kinds of external sensory information from the limbs and sensory organs to minimize motor errors (Sober and Sabes, 2005). Second, goal-selecting behavior may represent a common attribute of all kinematic parameters: although each kinematic parameter has unique behavioral characteristics, their common purpose is to set a goal of movement and reach it.
Our DNN models could identify not only cortical areas according to kinematic parameters, but also shared functions for them. In the present study, shared areas were identified as SMG, superior parietal lobule, and AG within PPC by DNN models and explainable AI. PPC is known to convert external sensory information of limbs to an internal kinematic model and then locate them in the desired location (Buneo and Andersen, 2006).
Since the external sensory information for each kinematic parameter is integrated into an internal kinematic model in the neural system, the shared areas appear to contribute to all kinematic parameters. A distinct feature of the shared areas is that they are centered around the bilateral SMG. This region is related not only to somatosensory integration but also to goal selection. Fogassi et al. (2005) found that neurons in the primate left IPL, which includes the SMG and AG, represent action goals, and inducing a virtual lesion at this site with rTMS delays goal-oriented action (Tunik et al., 2008). Based on these functions, we speculate that the shared areas were identified for all kinematic parameters because of these two processes.

Dominant areas showing unique characteristics for decoding each kinematic parameter in 3D space
We found that each kinematic parameter was identified within the cortices of the motor-related network. In our study, the acceleration areas consisted of the contralateral motor cortex, including the hand and arm regions of M1 and S1, the SMA, and a subregion of DLPFC (BA46). This result indicates that these motor cortices best explain the behavioral attributes of acceleration. Contrary to previous studies (Ashe and Georgopoulos, 1994; Bourguignon et al., 2012; Jerbi et al., 2007; Kadmon Harpaz et al., 2018; Moran and Schwartz, 1999; Wang et al., 2007), which related the motor area to all kinds of movement kinematic parameters, we found that these motor-execution areas represented only acceleration. Although a neural relationship between M1 (or the motor region) and movement acceleration has been reported (Bourguignon et al., 2012; Kadmon Harpaz et al., 2018), those studies could not address the overlap between motor cortices and all kinematic parameters, since they did not include velocity or position. Unlike previous studies, our study isolated the dominant cortical areas for producing acceleration by using DNN models and explainable AI methods.
In reaching movement, the behavioral attribute of acceleration is closely related to the control of muscle force. Force increases as the reach is initiated from a stationary state and then decreases for fine adjustment of limb action as the hand approaches the target; muscle force during motor execution can thus be described as acceleration. The reason why M1 appears only for acceleration might be that this region is known to encode muscle force (Evarts, 1968; Kakei et al., 1999). Large-scale studies (Bourguignon et al., 2012; Jerbi et al., 2007) have found that S1, like M1, is related to several kinematic parameters; we found that S1 was significantly involved only in acceleration, possibly reflecting somatosensory processing for muscle control such as proprioception (Tuthill and Azim, 2018). Interestingly, different roles were found for subregions of the premotor cortex (Brodmann's area 6, BA6), which is involved in movement preparation: SMA encoded acceleration, whereas the premotor area (here, PMd) encoded velocity. We note that SMA is involved in the initiation and execution of movement (Eckert et al., 2006; Krainik et al., 2001). Whereas the premotor area prepares a visually guided movement, SMA encodes the initiation and execution of upcoming motor sequences (Roland et al., 1980a; Roland et al., 1980b). The functions of the contralateral sensorimotor areas and SMA can thus explain how humans produce acceleration, and these areas appear among the acceleration-dominant areas in our DNN models.
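The relation among the three kinematic parameters underlying this discussion can be illustrated numerically. The following is a minimal sketch (not the authors' preprocessing pipeline) that differentiates a hypothetical minimum-jerk position trace into velocity and acceleration; note how acceleration, the force-like quantity, is positive at movement onset and reverses sign during braking near the target:

```python
import numpy as np

# Hypothetical 1-D hand position trace sampled at 1 kHz: a smooth reach
# that starts and ends at rest (minimum-jerk profile from 0 to 1).
fs = 1000.0                                  # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)                # 1-s reach
position = 10 * t**3 - 15 * t**4 + 6 * t**5  # minimum-jerk trajectory

# Velocity and acceleration as successive numerical derivatives.
velocity = np.gradient(position, t)
acceleration = np.gradient(velocity, t)

# Force scales with acceleration (F = m * a): positive while the limb
# speeds up early in the reach, negative while it brakes at the target.
assert acceleration[:100].mean() > 0   # accelerating at movement onset
assert acceleration[-100:].mean() < 0  # decelerating near the target
```

The sign change of acceleration halfway through the reach is what the text above describes as force increasing at initiation and decreasing for fine adjustment.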
Considering the behavioral characteristics of movement, velocity is presumed to be the most important variable for preparing a reaching movement and for visuomotor transformation. Participants prepared each reach based on velocity (or speed) because they intended to reach out their hand and draw it back to the initial point as fast as possible. Consistent with this behavior, the cortices that accounted for velocity appeared in regions for visuomotor transformation and motor planning. The areas serving these functions are mIPS and PMd within the parieto-frontal network, one of the key systems for visually guided reaching movement (Burnod et al., 1999). The parietal part of the network, mIPS, encodes reaching movement from visuospatial information of the action space (Vesia and Crawford, 2012), and the encoded information is known to include action intentions (Vesia and Davare, 2011). Given the importance of mIPS in velocity decoding, the encoded information presumably represents the velocity of movement rather than position or acceleration. The encoded reaching movement is then transmitted to PMd along the parieto-frontal network. The frontal part, PMd, is well known as a reaching region within the motor preparation area (Churchland et al., 2006b), and the velocity of an upcoming reach can be decoded from neural signals of PMd (Churchland et al., 2006a). After the velocity-based motor preparation is transmitted to areas for motor execution, the reaching movement can be executed through muscle control driven by acceleration.
Cortical areas for position were identified from large-scale visuomotor cortices such as SPOC, LIP, FEF, and SEF. These regions are known to play roles in visuospatial processing and the control of eye (or saccadic) movement (Andersen et al., 1990; Purcell et al., 2012; Vesia and Crawford, 2012; Vesia et al., 2010). Positional information on limb movement is mainly encoded through visual processing. The first behavior participants performed was capturing the target based on spatial information; they then established a movement space from the hand to the target. After movement onset, participants tracked their hand in this space and continuously updated the displacement between the moving hand and the target to reach it accurately. Since saccadic eye movement and spatial processing between the hand and the target in the action space are necessary for accurate movement, visuomotor areas contribute significantly to position decoding. We found a contribution of the visuospatial area V3a of the visual cortices. Along with this visual space area, SPOC in the occipito-parietal junction (Vesia and Crawford, 2012) corresponds to the parietal reach region (PRR) of primates, which encodes reaching movement (Galletti et al., 2003; Vesia and Crawford, 2012). Disrupting the function of SPOC by rTMS increases positional errors of reaching movement (Vesia et al., 2010). According to gain field theory, receptive fields of this region can be modulated by gaze position, target position, and hand position (Chang et al., 2009), and the displacement between the hand and gaze has been shown to be encoded by PRR in a primate study (Blohm and Crawford, 2009). For hand position, saccadic eye movement information, together with the target and the reaching motion, seems even more important than motor information for reaching control, in addition to visuospatial information. Areas such as LIP, SEF, and FEF also appeared to contribute to position decoding.
These areas are known to be related to the control of saccadic eye movements. In the parietal part, LIP is known to encode saccadic movement during reaching (Andersen et al., 1990). In the frontal part, SEF and the ipsilateral FEF participate in the initiation of saccadic movement and in oculomotor control (Purcell et al., 2012; Schall, 2004). Since ipsilateral motor-related areas appeared along with FEF, the contribution of these regions may also represent eye-related information. The identification of these areas for position could also reflect peripersonal space processing between the hand and the target. An ablation study in primates has shown that lesions in FEF cause inattention to stimuli presented in this space (Berti et al., 2001), and in humans, ipsilateral brain damage can cause neglect of the peripersonal space (Halligan and Marshall, 1991). The visuospatial areas in our results therefore possibly reflect spatial processing between the participants' hand and the target.
Interestingly, in our results, specific cortical areas were segregated into subregions according to kinematic parameters, most notably in the intraparietal sulcus, DLPFC, and occipital lobe. As discussed above, the DNN models used neural information from subregions within the intraparietal sulcus for velocity and position, and the identified IPS regions matched previous findings (Andersen et al., 1990; Vesia et al., 2010). Within DLPFC, an anterior part contributed to velocity, a posterior part to position, and a part inferior to the velocity region to acceleration. The well-known functions of DLPFC are attention and decision-making; however, DLPFC also seems to be involved in movement control (Bourguignon et al., 2012; Jerbi et al., 2007; Ryun et al., 2014). As in the intraparietal region, we speculate that specific subregions of DLPFC might control the corresponding kinematic parameters. Further research is needed to clarify the relationship between DLPFC subregions and each kinematic parameter.
Across the cortical areas for kinematic parameters, functional lateralization between hemispheres followed motor-related functions. Motor-related functions appear to be processed in the contralateral cortex, whereas other functions, such as processing of body motion, space, and goal selection, are processed in bilateral or ipsilateral cortices. The current view of motor behavior is that limbs are controlled by contralateral motor areas such as M1, PMd, S1, and BA7 (Bourguignon et al., 2012; Churchland et al., 2006a; Jerbi et al., 2007). In the present study, the contralateral motor cortex contributed to decoding acceleration and velocity, presumably because acceleration is related to muscle control and velocity is processed through the parieto-frontal network for visuomotor transformation. We speculate that motor control or sensorimotor transformation is required for processing these kinematic parameters and involves contralateral cortices. In contrast, for spatial processing and action goal selection, lateralization appeared weaker: V3 and SMG were found bilaterally, and some ipsilateral areas were involved in kinematics processing, namely the visual association area among the acceleration areas and the primary sensorimotor area among the shared areas. Unfortunately, the roles of the ipsilateral hemisphere in visuomotor behavior remain unclear. The contribution of ipsilateral areas might be due to processing of the accelerating motion of biological limbs (Limanowski et al., 2018; Schlack et al., 2007) or to transcallosal inhibition of motor cortices that facilitates kinematic control of the contralateral limb (Kobayashi et al., 2004). Further research on the functional lateralization of the kinematics map will be necessary.
Current non-invasive neuroimaging analyses have been based on simple relationships between neural and behavioral signals, such as corticokinematic coherence (Bourguignon et al., 2019; Jerbi et al., 2007) or the general linear model. However, such approaches lack the sensitivity to segregate cortical representations according to the various cognitive components hidden within a behavior. Here, a state-of-the-art methodological scheme is presented to overcome this limitation: with a DNN model and explainable AI, it is possible to sensitively map the kinematic representation of the cerebral cortices hidden inside a biologically plausible neural model. Beyond the representation of motor components, we hope to apply this methodological scheme to explore the neural representations of various human behaviors and cognition, such as the semantic representation of words, the neural representation of visual objects, or the kinetics of movement.

Limitations
The reliability of the maps extracted by explainable AI depends on the decoding accuracy of the neural network models (Lawhern et al., 2018). If a model could perfectly decode kinematic parameters from neural data, cortical areas for these parameters would be identified more precisely; our model did not reach that level. Nevertheless, since our DNN models significantly decoded all kinematic parameters above r = .80 (p < .001), we regard this performance as the current state of the art and consider the identified cortical areas reliable enough for interpreting the kinematic parameters. In the future, a more biologically plausible and more accurate neural network model may produce a more reliable and sophisticated cortical contribution map.
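To make the accuracy criterion concrete, the sketch below quantifies decoding performance as the Pearson correlation r between a decoded and a measured trajectory. The trajectories here are synthetic and the function is a generic illustration, not the paper's evaluation code:

```python
import numpy as np

def pearson_r(decoded, measured):
    """Pearson correlation between a decoded and a measured trajectory."""
    decoded = np.asarray(decoded, float) - np.mean(decoded)
    measured = np.asarray(measured, float) - np.mean(measured)
    return float(np.dot(decoded, measured) /
                 (np.linalg.norm(decoded) * np.linalg.norm(measured)))

# Toy example: a decoded trace that tracks the measured one with noise.
rng = np.random.default_rng(0)
measured = np.sin(np.linspace(0, 4 * np.pi, 500))   # "true" kinematics
decoded = measured + 0.3 * rng.standard_normal(500)  # noisy model output
r = pearson_r(decoded, measured)
assert r > 0.8   # on the order of the decoding accuracies reported above
```

A decoder whose output tracks the measured kinematics with moderate noise already exceeds r = .80, which is why correlations at that level are treated as strong trajectory decoding.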
Many kinds of explainable AI methods are currently available, and imaging results may vary depending on the method chosen (Ancona et al., 2017). In our study, we implemented integrated gradients. This method satisfies two axioms, sensitivity and implementation invariance, that previous attribution methods could not fulfill (Sundararajan et al., 2017). We expect that more sophisticated imaging will become possible as explainable AI methodologies develop.
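For concreteness, integrated gradients can be sketched in a few lines of NumPy. The toy linear "decoder" below is an assumption for illustration, with its gradient supplied analytically (attributions for a real DNN would use automatic differentiation); for a linear model the attributions reduce exactly to w_i * x_i, and their sum equals f(x) - f(baseline), the completeness property:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Approximate IG_i = (x_i - b_i) * integral over alpha in [0, 1] of
    df/dx_i evaluated at b + alpha * (x - b), via a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy linear readout of 3 cortical source signals (hypothetical weights).
w = np.array([0.5, -2.0, 3.0])
f = lambda x: float(w @ x)        # model output
grad_f = lambda x: w              # constant gradient of a linear model
x = np.array([1.0, 1.0, 2.0])     # input to attribute
baseline = np.zeros(3)            # all-zero reference input

attr = integrated_gradients(grad_f, x, baseline)
assert np.allclose(attr, w * x)                    # exact for linear models
assert np.isclose(attr.sum(), f(x) - f(baseline))  # completeness axiom
```

In the paper's setting, each input dimension would correspond to a cortical source time point, so averaging such attributions over trials yields the contribution map described above.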

Conclusion
In this study, cortical areas for acceleration, velocity, and position were identified using DNN models and explainable AI. The cortical representation of the kinematic process was twofold. In shared areas,