Multidimensional cerebellar computations for flexible kinematic control of movements

Both the environment and our body keep changing dynamically. Hence, ensuring movement precision requires adaptation to multiple demands occurring simultaneously. Here we show that the cerebellum performs the necessary multi-dimensional computations for the flexible control of different movement parameters depending on the prevailing context. This conclusion is based on the identification of manifold-like activity in both mossy fibers (MFs, network input) and Purkinje cells (PCs, output), recorded from monkeys performing a saccade task. Unlike MFs, the PC manifolds developed selective representations of individual movement parameters. Error feedback-driven climbing fiber input modulated the PC manifolds to predict specific, error type-dependent changes in subsequent actions. Furthermore, a feed-forward network model that simulated MF-to-PC transformations revealed that amplification and restructuring of the weaker variability in the MF activity is a pivotal circuit mechanism. Therefore, the flexible control of movements by the cerebellum crucially depends on its capacity for multi-dimensional computations.


Trials with CS activity (i.e., 'CS-trials') within the post-saccadic period of 50-140 ms are taken to report an error in the opposite direction, regardless of whether such errors actually occurred.

We did this for every PC and combined all trials (with and without CSs) that reported outward errors.

Fitting the linear rate model to data
In the main text, we modeled the firing rate vector z of a "pseudo-population" containing N neurons as z = z₀ + ∂z, where z₀ and ∂z are the kinematics-independent and kinematics-dependent parts, respectively. ∂z = z − z₀ is the deviation of z from the mean value of z, z₀.
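As a minimal illustration of this decomposition, z = z₀ + ∂z can be fitted across trials by ordinary least squares. The sketch below uses synthetic data; the sizes (numbers of trials, neurons, and kinematic parameters) and all variable names are our illustrative assumptions, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: trials, neurons (N), kinematic parameters (K).
n_trials, N, K = 200, 30, 2

dp = rng.normal(size=(n_trials, K))        # trial-by-trial kinematic deviations
true_B = rng.normal(size=(K, N))           # kinematic sensitivity of each neuron
z0 = rng.normal(size=N)                    # kinematics-independent mean rate z0
rates = z0 + dp @ true_B + 0.1 * rng.normal(size=(n_trials, N))

# Least-squares fit of rates = z0 + dp @ B across trials:
X = np.column_stack([np.ones(n_trials), dp])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
z0_hat, B_hat = coef[0], coef[1:]              # recovered z0 and kinematic part
```

With enough trials, `z0_hat` and `B_hat` recover the generating parameters up to the noise level, giving the kinematics-independent and -dependent components per neuron.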

We fitted this model to the MF and PC data in the following way: we first estimated the firing rate in each trial by the fractional interspike interval method 1 (Δt = 5 ms). Other estimation methods, such as spike train
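A minimal sketch of the fractional interspike interval estimator referenced above: within each bin of width Δt, every interspike interval contributes its reciprocal firing rate, weighted by the fraction of the bin it occupies. The function name and the example spike train are ours, for illustration only.

```python
import numpy as np

def fractional_isi_rate(spike_times, t_start, t_end, dt=0.005):
    """Firing rate per bin: each interspike interval (ISI) contributes 1/ISI,
    weighted by the fraction of the bin that the ISI covers."""
    n_bins = int(round((t_end - t_start) / dt))
    edges = t_start + dt * np.arange(n_bins + 1)
    rate = np.zeros(n_bins)
    for a, b in zip(spike_times[:-1], spike_times[1:]):
        isi = b - a
        for j in range(n_bins):
            overlap = max(0.0, min(b, edges[j + 1]) - max(a, edges[j]))
            rate[j] += overlap / (isi * dt)   # (fraction of bin) x (1 / ISI)
    return rate

# Example: spikes 20 ms apart give 50 Hz; spikes 40 ms apart give 25 Hz.
spikes = np.array([0.010, 0.030, 0.050, 0.090])
r = fractional_isi_rate(spikes, 0.0, 0.1)
```

Unlike simple binned counts, this estimator changes smoothly with spike timing, which matters at the short 5 ms bins used here.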

Dimensionality Reduction by Principal Component Analysis with Perturbation
We developed a simple variant of principal component analysis (PCA) to perform dimensionality reduction of the population firing rate model. We first assume that the population firing is known for any specific kinematic parameters (within the range observed in the data). Then, we find an approximation of the population firing and its change with kinematic parameters by another dynamical process with lower dimensionality. Our goal is that, if we perform PCA on this approximated population firing, the result will be sufficiently close to that of the original population firing for any kinematic parameters. In our experimental data, the trial-to-trial variabilities of the kinematic parameters and firing rates are relatively small. In this case, we can use matrix perturbation theory to find such an approximation of the population firing.

Our result can be summarized by Equation 2 (Supplementary figure 5e,f), provided that ∂z is sufficiently small. Furthermore, when we perform PCA on z, the result is dominated by the first term, since z⊥ makes a negligible contribution (see below). Therefore, we performed most of our manifold analysis in terms of z₀ + ∑ ∂z (Supplementary figure 5c,d and g,h), without considering z⊥. An exception is the dimensionally reduced MF firing given to the linear feed-forward network as input (Figure 6).
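The perturbation argument can be checked numerically: when the kinematics-dependent deviation is small, PCA of the full population activity and PCA of its kinematics-independent part span nearly the same subspace. The sketch below uses synthetic low-dimensional activity; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, d = 40, 500, 3                      # neurons, time points, manifold dim.

basis = np.linalg.qr(rng.normal(size=(N, d)))[0]   # d-dimensional manifold
z0 = basis @ rng.normal(size=(d, T))               # kinematics-independent part
dz = 0.05 * rng.normal(size=(N, T))                # small perturbation
z = z0 + dz

def top_pcs(x, k):
    """Leading k principal directions of the time-centered data."""
    x = x - x.mean(axis=1, keepdims=True)
    return np.linalg.svd(x, full_matrices=False)[0][:, :k]

U_full, U_base = top_pcs(z, d), top_pcs(z0, d)
# Smallest singular value of the subspace overlap: 1 means identical subspaces.
align = np.linalg.svd(U_full.T @ U_base, compute_uv=False).min()
```

Standard matrix perturbation bounds (e.g., Davis-Kahan) guarantee that this alignment degrades only in proportion to the size of the perturbation relative to the spectral gap, which is why small trial-to-trial variability justifies the approximation.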

Here we use the full Eq. S1 for the approximate firing rates of individual neurons.
Finally, by using the eigenvector perturbation in Eq. S2, we determined the rest of the components in Eq. S1.

Comparing manifolds by canonical correlation analysis
In the main text, we compared manifolds obtained from two or more different data sets, such as firings recorded with left-directed saccades and those recorded with right-directed saccades. We used canonical correlation analysis (CCA) to verify whether a pair of manifolds can be related by a linear transformation 7-9. We first applied the CCA to the kinematics-independent components of the two data sets to find the best linear alignment transformation between them. Then, we transformed the kinematics-dependent parts by the alignment transform and checked how well they matched each other.

Given two data sets A and B, we denote the kinematics-independent components (z₀ in Eq. S1) of their manifolds by z₀ᴬ and z₀ᴮ, respectively. Also, we discretize time and consider them as N × T matrices rather than continuous functions of time.
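Concretely, a CCA alignment of this kind can be sketched with an SVD-based implementation. The variable names, sizes, and synthetic data below are our illustrative assumptions, not the study's code: here data set B is constructed as a linear image of A, so a near-perfect alignment should exist.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, d = 20, 300, 4                      # neurons, time bins, CCA dimensions

# Synthetic kinematics-independent components (N x T); B is a linear map of A.
XA = rng.normal(size=(N, T))
XB = rng.normal(size=(N, N)) @ XA + 0.01 * rng.normal(size=(N, T))

def cca_align(XA, XB, d):
    """Projections QA, QB (N x d) making QA.T @ XA and QB.T @ XB maximally
    correlated, plus the top d canonical correlations."""
    XA = XA - XA.mean(axis=1, keepdims=True)
    XB = XB - XB.mean(axis=1, keepdims=True)
    Ua, Sa, Vta = np.linalg.svd(XA, full_matrices=False)
    Ub, Sb, Vtb = np.linalg.svd(XB, full_matrices=False)
    U, corr, Vt = np.linalg.svd(Vta @ Vtb.T)   # correlate the whitened data
    QA = Ua @ np.diag(1.0 / Sa) @ U[:, :d]
    QB = Ub @ np.diag(1.0 / Sb) @ Vt.T[:, :d]
    return QA, QB, corr[:d]

QA, QB, corr = cca_align(XA, XB, d)
# Canonical correlations near 1 indicate the manifolds are linearly related;
# the same QA, QB can then be applied to the kinematics-dependent parts.
```

Fitting the alignment on the kinematics-independent components and only afterwards applying it to the kinematics-dependent parts, as described above, keeps the test of the kinematic representations independent of the alignment step.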

We assume that p(z) is approximately a Gaussian distribution with zero mean. Then, we obtain the error function

In evaluating the performance, we measured the total variability by replacing the prediction (z₀, ∂z) in