Mapping functional brain networks from the structural connectome: Relating the series expansion and eigenmode approaches

Functional brain networks are shaped and constrained by the underlying structural network. However, functional networks are not merely a one-to-one reflection of the structural network. Several theories have been put forward to understand the relationship between structural and functional networks, but it remains unclear how these theories can be unified. Two recent theories state that 1) functional networks can be explained by all possible walks in the structural network, which we will refer to as the series expansion approach, and 2) functional networks can be explained by a weighted combination of the eigenmodes of the structural network, the so-called eigenmode approach. To elucidate the unique or common explanatory power of these approaches to estimate functional networks from the structural network, we analysed the relationship between these two existing views. Using linear algebra, we first show that the eigenmode approach can be written in terms of the series expansion approach, i.e., walks on the structural network associated with different hop counts correspond to different weightings of the eigenvectors of this network. Second, we provide explicit expressions for the coefficients of both the eigenmode and series expansion approaches. These theoretical results were verified with empirical data from Diffusion Tensor Imaging (DTI) and functional Magnetic Resonance Imaging (fMRI), demonstrating a strong correlation between the mappings based on both approaches. Third, we analytically and empirically demonstrate that the fit of the eigenmode approach to measured functional data is always at least as good as the fit of the series expansion approach, and that errors in the structural data lead to large errors in the estimated coefficients of the series expansion approach. Results hold for eigenmodes of the weighted adjacency matrices as well as eigenmodes of the graph Laplacian.
Therefore, we argue that the eigenmode approach should be preferred over the series expansion approach. Taken together, these results provide an important step towards unification of existing theories regarding the structure-function relationships in brain networks.


Introduction
For many years, structural and functional brain networks have been studied independently (Bullmore and Sporns, 2012;Sotiropoulos et al., 2013), revealing partly overlapping and partly divergent connectivity patterns. The last two decades have brought a wealth of studies that specifically aimed to elucidate the relationship between the two (Wang et al., 2015;Fjell et al., 2017;Avena-Koenigsberger et al., 2018;Honey et al., 2009;Greicius et al., 2009). Structural networks are believed to shape and provide constraints for the dynamics of functional connectivity, which can be measured at different time-scales (O'Neill et al., 2017). Given this stance, it has been widely acknowledged that, to some extent, functional networks can be predicted from the underlying structural connectome (Rosenthal et al., 2018;Mišić et al., 2016;Honey et al., 2007). However, local neuronal dynamics can impact the emergence of functional connectivity, especially on shorter time scales (Forrester et al., 2019;Ton et al., 2014), rendering a direct mapping between structural and functional networks limited to the domain where functional connectivity is estimated at longer time scales.
A vital observation is that functional networks are not merely a one-to-one reflection of the underlying structural network (Honey et al., 2009), which motivated the search for explanations. Apart from the direct connections within the structural network, additional properties of the structural network were also found to shape the functional networks. These influencing properties include the Euclidean distance between regions, the outer product of the structural degree sequence (Tewarie et al., 2014;Stam et al., 2016), detours along the shortest paths in the structural network (Goñi et al., 2014), and diffusion properties of the structural network (Kuceyeski et al., 2016;Abdelnour et al., 2014). Approaches using coupled neural mass models have also been successful in explaining the emergence of resting-state functional networks from the underlying structural network, with the prevalent view that resting-state functional networks appear if the underlying system operates in a metastable regime (Deco et al., 2017b;Deco and Kringelbach, 2016;Wens et al., 2019) or, as also assumed, in a multistable regime (Deco et al., 2013, 2009;Deco and Jirsa, 2012). Several studies have aimed to formalise the mapping between structural and functional networks (Robinson, 2012;Meier et al., 2016;Saggio et al., 2016;Deco et al., 2014). Since the seminal work by Robinson (2012), several groups have independently demonstrated that functional connectivity can be represented in terms of the sum of all possible walks on the underlying structural network (Robinson, 2012;Meier et al., 2016;Bettinardi et al., 2017;Mehta-Pandejee et al., 2017;Robinson et al., 2014;Becker et al., 2018;Gilson et al., 2018), which can also be understood in terms of flow equations or propagator theory on the network (Van Mieghem et al., 2017;Robinson, 2012).
The elegance of this approach is that it also incorporates other concepts, such as the importance of shortest paths and detours from these paths (Goñi et al., 2014), as well as indirect paths of length two (Vértes et al., 2012), for the formation of functional networks. One way to describe this 'mapping' approach between structure and function is to express the relationship as a series expansion.
Recent years have also seen a wealth of explorations by several independent groups of the eigenmode approach, in which the eigenvectors of the structural network are believed to form a basis set to explain functional networks (Atasoy et al., 2016;Robinson et al., 2016;Wang et al., 2017;Abdelnour et al., 2018;Gabay et al., 2018). Several of these structural eigenmodes have been related to known functional subnetworks (Atasoy et al., 2016), and combinations of eigenmodes were able to explain the occurrence of frequency-specific functional networks. This approach has also found its way to applications in neuroscience and was able to detect alterations in brain states during sleep (Tokariev et al., 2019). Most of these studies extract the eigenmodes from the Laplacian of the structural connectome (Abdelnour et al., 2018;Atasoy et al., 2016), or even from the Laplacian of structural connectivity along the cortical surface (Gabay et al., 2017;Robinson et al., 2016), although eigenmodes of the weighted adjacency matrix itself have also been used. We refer the reader to (Van Mieghem, 2016) for a theoretical underpinning of the eigenmode approach for the Laplacian.
Instead of exploring an abundance of unrelated or complementary theories simultaneously, we strive for a unification of theories regarding the relationship between structural and functional brain networks. Given the robustness and generalizability of the series expansion and eigenmode approaches (which have been applied to different datasets by independent groups), we aim to understand the theoretical link between them. A theoretical framework to prove the equivalence between the series expansion and eigenmode approaches was recently put forward in (Robinson, 2019). The author demonstrated that the series approach can be formulated in terms of the spectral approach, and demonstrated the same mapping between structural and functional brain networks in the topological and spectral domains. Here we take this notion further: we first illustrate a theoretical link between the model coefficients of the eigenmode and series expansion approaches. We then analyse the strength of this relationship in empirical data from Diffusion Tensor Imaging (DTI) and resting-state functional Magnetic Resonance Imaging (fMRI). We subsequently analyse both approaches in terms of an optimization problem by comparing analytically derived expressions for their goodness of fit, followed by a comparison of the goodness of fit in empirical data. Since empirical data lack a ground truth, we also verify the link between the eigenmode and series expansion approaches in neural-mass-based network simulations. In addition, we analyse whether the eigenvectors of the structural and functional networks in empirical data are indeed similar.

Theoretical link between series expansion and eigenmode approaches
We consider structural and functional networks, which can be described by a set of nodes and links, where link weights are given by the corresponding connectivity matrices. We consider the structural connectivity matrix $A \in \mathbb{R}^{N \times N}$ and the functional connectivity matrix $W \in \mathbb{R}^{N \times N}$, where $N$ denotes the number of nodes (regions) in both networks. Both matrices are symmetric with zeros along the diagonal. Here, for the eigenmode approach, we consider the eigenvector matrix $V$ of $A$, where each column corresponds to one of the eigenvectors of $A$, rather than the eigenvectors of the Laplacian of the structural network (Van Mieghem, 2010) (see the section Extension of theoretical and numerical comparisons using the eigenmodes of the graph Laplacian below for an analysis in terms of the graph Laplacian). The eigenmode approach assumes the following:

$$W \approx V S V^{T}, \qquad (1)$$

where $S$ is a diagonal matrix. The elements $s_i$, $i = 1, \ldots, N$, on the diagonal of $S$ correspond to the weighting coefficients, which can be estimated from empirical data or derived analytically (see the section Fitting coefficients of the eigenmode and series expansion approaches to experimental data below). The assumption in the eigenmode approach is that a functional network can be partly explained as a weighted linear combination of the eigenvectors of the structural network. Previous work has demonstrated that, besides this linear combination, a non-linear combination of the eigenvectors also contributes significantly to the prediction of functional networks. However, the linear combination by itself could already explain a large proportion of the variance of frequency-specific functional networks. In the series approach, the functional connectivity matrix is expressed as a Taylor series expansion of the structural matrix, with unknown coefficients $c_m$, $m = 1, \ldots, d$, and can be reduced to (see equation (4) in )

$$W \approx \sum_{m=1}^{d} \frac{c_m}{\lVert A^m \rVert_2} A^m, \qquad (2)$$

where $d$ is the diameter (the length of the longest shortest path) of the structural network.
Dividing the coefficients $c_m$ by the 2-norm $\lVert A^m \rVert_2$ is done because powers of $A$ diverge quickly with increasing $m$. The series is truncated at $m = d$, as previous work has demonstrated that the fit with empirical fMRI or magnetoencephalography (MEG) data converges for this value. However, the precise reason why the diameter appears to be a sufficient bound is an open graph-theoretical question. A normalization by $m!$ instead of $\lVert A^m \rVert_2$ has also been used in the literature (Gilson et al., 2018), which would transform equation (2) into an expression of communicability if $d$ were taken to infinity (Estrada and Hatano, 2008). The relationship between the two normalisations can be readily understood using the Borel transform (Borel, 1899). For unweighted networks, the powers of $A$ correspond to configurations of walks of length $m$ in the structural network. The Taylor series coefficients $c_m$ can be estimated from empirical data or derived analytically (see the section Fitting coefficients of the eigenmode and series expansion approaches to experimental data below). Since the matrix $A$ is symmetric, we can diagonalise $A$ to obtain its real eigenvalues $\lambda_i$ and eigenvectors $v_i$, $i = 1, \ldots, N$. The eigenvectors are orthogonal, hence $A^m = V D^m V^{T}$ holds and we can rewrite equation (2) as

$$W \approx \sum_{m=1}^{d} \frac{c_m}{\lVert A^m \rVert_2} V D^m V^{T}, \qquad (3)$$

where $D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N)$. The series approach thus states that every addend in equation (3) contributes a different weighting of the eigenvectors, determined by the powers of the eigenvalues $\lambda_i$ and the Taylor coefficients $c_m$. The series and eigenmode approaches are thus equivalent (or strongly related) if the weighting coefficients on the diagonal of $S$ for the eigenmodes are (approximately) equal to the aggregated series coefficients:

$$s_i \approx \sum_{m=1}^{d} \frac{c_m}{\lVert A^m \rVert_2} \lambda_i^m, \qquad i = 1, \ldots, N. \qquad (4)$$

Although a relationship between the two approaches can be readily demonstrated using linear algebra, it remains to be elucidated whether the coefficients estimated from empirical data are indeed similar for both approaches.

[Footnotes: Multistability refers to the co-existence of multiple attractors. An eigenmode refers to the eigenvalue and corresponding eigenvector of the weighted adjacency matrix or graph Laplacian. The graph Laplacian is defined as the diagonal matrix with the degrees on the diagonal minus the adjacency matrix.]
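The identity $A^m = V D^m V^{T}$ underlying equations (3) and (4) can be checked numerically. The sketch below (in Python rather than the paper's MATLAB, using a synthetic random matrix and arbitrary coefficients) confirms that the series expansion and the corresponding eigenmode weighting produce the same matrix:

```python
# Numerical sketch (synthetic data, not from the paper's code): verify that
# the series expansion of A equals an eigenmode weighting with coefficients
# s_i = sum_m c_m * lambda_i^m / ||A^m||_2, as in equations (3) and (4).
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.random((N, N))
A = (A + A.T) / 2            # symmetric "structural connectivity"
np.fill_diagonal(A, 0.0)     # zero diagonal, as in the paper

lam, V = np.linalg.eigh(A)   # real eigenvalues, orthonormal eigenvectors
c = rng.random(4)            # arbitrary series coefficients c_1..c_d
d = len(c)

# spectral 2-norms of the matrix powers A^m
norms = [np.linalg.norm(np.linalg.matrix_power(A, m), 2) for m in range(1, d + 1)]

# Series expansion: sum_m c_m A^m / ||A^m||_2   (equation (2))
W_series = sum(c[m - 1] * np.linalg.matrix_power(A, m) / norms[m - 1]
               for m in range(1, d + 1))

# Equivalent eigenmode weighting (equations (3)-(4))
s = sum(c[m - 1] * lam ** m / norms[m - 1] for m in range(1, d + 1))
W_eigen = V @ np.diag(s) @ V.T

print(np.allclose(W_series, W_eigen))
```

The agreement holds to machine precision because both expressions are the same matrix written in two bases.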

Common eigenvectors between structural and functional brain networks
An exact spectral mapping would imply that the eigenvectors of the structural and functional networks are identical, i.e., that one basis simultaneously diagonalises both the structural and functional connectomes (Estrada and Hatano, 2008). A necessary condition for simultaneous diagonalisation is that the structural and functional adjacency matrices commute, $[A, W] = 0$, with $[A, W] = AW - WA$. Since we would like to analyse how close the commutator is to zero relative to $A$ and $W$ respectively, we evaluate i) $\lVert AW - WA \rVert_F / \lVert A \rVert_F^2$ and ii) $\lVert AW - WA \rVert_F / \lVert W \rVert_F^2$ (Higham, 2008). Here $\lVert A \rVert_F$ and $\lVert W \rVert_F$ denote the Frobenius norms of the matrices $A$ and $W$, respectively. The closer i) is to zero, the better the approximation that a basis of $W$ spans $A$. Similarly, the closer ii) is to zero, the better the approximation that a basis of $A$ spans $W$. If both i) and ii) are close to zero, this indicates that a common basis spans both matrices. Table 1 shows the normalised commutators for empirical and simulated data.
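The normalised commutator is straightforward to compute; the sketch below (synthetic matrices, not the empirical data) contrasts a functional matrix that shares the eigenbasis of $A$ (where the commutator vanishes) with an unrelated random one:

```python
# Illustrative sketch: normalised commutator norms as a proxy for how close
# A and W are to sharing an eigenbasis (hypothetical random matrices).
import numpy as np

def normalised_commutators(A, W):
    C = A @ W - W @ A                       # commutator [A, W]
    nC = np.linalg.norm(C, 'fro')
    return (nC / np.linalg.norm(A, 'fro') ** 2,
            nC / np.linalg.norm(W, 'fro') ** 2)

rng = np.random.default_rng(1)
A = rng.random((5, 5)); A = (A + A.T) / 2
W_commuting = A @ A                         # polynomial of A: same eigenvectors
W_random = rng.random((5, 5)); W_random = (W_random + W_random.T) / 2

print(normalised_commutators(A, W_commuting)[0])   # essentially zero
print(normalised_commutators(A, W_random)[0])      # clearly nonzero
```

Any polynomial of $A$ commutes with $A$ exactly, which is why the first value vanishes up to rounding.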

Fitting coefficients of the eigenmode and series expansion approaches to experimental data
The fitting error of the eigenmode approach depends on the eigenmode coefficients $s_1, \ldots, s_N$ and follows from equation (1) as

$$\varepsilon_{\mathrm{eigen}}(S) = \lVert W - V S V^{T} \rVert_F. \qquad (5)$$

The smaller the error $\varepsilon_{\mathrm{eigen}}(S)$, the better the fit of the eigenmode approach. In other words, a smaller $\varepsilon_{\mathrm{eigen}}(S)$ corresponds to a higher explanatory power of the eigenmode approach. Hence, to obtain the relationship between the structural connectivity matrix $A$ and the functional connectivity matrix $W$, we aim to find the diagonal matrix $S$ that minimises the error $\varepsilon_{\mathrm{eigen}}(S)$. Note that $W$ refers to the empirical or simulated functional connectivity matrix. Lemma 1 gives an explicit expression for the diagonal matrix $S$.
Lemma 1. The diagonal matrix $S$ that minimises the fitting error $\varepsilon_{\mathrm{eigen}}(S)$ has entries

$$s_i = v_i^{T} W v_i, \qquad i = 1, \ldots, N. \qquad (6)$$

The proof can be found in Appendix A.
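Since $V$ is orthogonal, $\lVert W - V S V^{T} \rVert_F = \lVert V^{T} W V - S \rVert_F$, so the optimal diagonal $S$ simply copies the diagonal of $V^{T} W V$. A minimal numerical sketch (synthetic matrices) confirms that perturbing the optimal coefficients can only increase the error:

```python
# Sketch of Lemma 1 on synthetic matrices: s_i = v_i^T W v_i minimises the
# eigenmode fitting error, because V is orthogonal.
import numpy as np

rng = np.random.default_rng(2)
N = 8
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)

_, V = np.linalg.eigh(A)
s_opt = np.diag(V.T @ W @ V)      # optimal coefficients (diagonal of V^T W V)

def eps_eigen(s):
    """Fitting error ||W - V diag(s) V^T||_F of equation (5)."""
    return np.linalg.norm(W - V @ np.diag(s) @ V.T, 'fro')

# Any perturbation of the optimal coefficients increases the error.
for _ in range(100):
    s_other = s_opt + 0.1 * rng.standard_normal(N)
    assert eps_eigen(s_other) >= eps_eigen(s_opt)
print(round(eps_eigen(s_opt), 4))
```

The residual that remains at the optimum is exactly the off-diagonal part of $V^{T} W V$, which no diagonal $S$ can absorb.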
The fitting error of the series expansion approach depends on the coefficient vector $c$ and follows from equation (3) as

$$\varepsilon_{\mathrm{series}}(c) = \Big\lVert W - \sum_{m=1}^{d} \frac{c_m}{\lVert A^m \rVert_2} A^m \Big\rVert_F. \qquad (7)$$

As for the eigenmode approach, the smaller $\varepsilon_{\mathrm{series}}(c)$, the better the fit and the higher the explanatory power of the series expansion approach. We now aim to find the coefficient vector $c$ that minimises the error $\varepsilon_{\mathrm{series}}(c)$. Similarly to Lemma 1, we can derive an expression for the coefficient vector $c$ of the series expansion approach.
Lemma 2. Determining the coefficient vector $c$ that minimises the fitting error $\varepsilon_{\mathrm{series}}(c)$ is equivalent to solving the linear least-squares problem

$$\min_{c} \lVert z - c^{T} P \rVert_2^2. \qquad (8)$$

Here, the $1 \times N$ vector $z$ has entries $z_i = v_i^{T} W v_i$, and the $d \times N$ matrix $P$ is a Vandermonde-type matrix with entries $P_{mi} = \lambda_i^m / \lVert A^m \rVert_2$. The proof can be found in Appendix B. Provided that all eigenvalues $\lambda_1, \ldots, \lambda_N$ are distinct, we obtain from equation (8) that the coefficient vector $c$ is given by (Boyd and Vandenberghe, 2018)

$$c^{T} = z P^{T} \big( P P^{T} \big)^{-1}. \qquad (9)$$

To compute the coefficient vector $c$ in practice, there are numerically stable methods, such as the MATLAB command mldivide, that solve the least-squares problem in equation (8) without computing the inverse in equation (9).

Table 1. Statistics of the intraclass correlation coefficient are displayed alongside the square of the correlation between the two estimated connectivity matrices from the two approaches ($R^2$), the diameter of the structural network (longest shortest path), the Frobenius norm of the commutator of $A$ and $W$, and the squared condition number $\kappa^2(P)$. The diameter of the structural networks was obtained after binarizing the corresponding weighted adjacency matrices.
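A minimal sketch of Lemma 2 (synthetic matrices; `np.linalg.lstsq` playing the role of MATLAB's mldivide) builds the Vandermonde-type matrix and solves the least-squares problem without forming the inverse in equation (9):

```python
# Sketch of Lemma 2 on synthetic matrices: estimate the series coefficients c
# by stable least squares instead of the explicit inverse of equation (9).
import numpy as np

rng = np.random.default_rng(3)
N, d = 10, 4
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)

lam, V = np.linalg.eigh(A)
z = np.diag(V.T @ W @ V)                 # targets z_i = v_i^T W v_i

# Vandermonde-type matrix: P[m-1, i] = lam_i^m / ||A^m||_2
P = np.vstack([lam ** m / np.linalg.norm(np.linalg.matrix_power(A, m), 2)
               for m in range(1, d + 1)])

# Solve min_c ||z - P^T c||_2 with a numerically stable routine
# (the Python analogue of MATLAB's mldivide).
c, *_ = np.linalg.lstsq(P.T, z, rcond=None)
print(c.shape)
```

By construction the residual of this solution can never exceed that of the trivial choice $c = 0$.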
We would like to stress that Lemmas 1 and 2 follow from basic linear algebra and should not be considered new theoretical results. Here, we present these results within the context of mappings between structural and functional networks in the field of neuroimaging. In addition to $\varepsilon_{\mathrm{eigen}}(S)$ and $\varepsilon_{\mathrm{series}}(c)$ as goodness-of-fit measures, we also computed the Pearson correlation coefficient between predicted and actual functional connectivity matrices, to allow for better interpretability for readers who are more familiar with this statistical measure.

Numerical errors for the series expansion approach
Suppose the functional connectivity matrix $W$ contains small estimation or measurement errors, yielding the perturbed matrix $\widetilde{W} = W + \Delta W$ for some small error matrix $\Delta W \in \mathbb{R}^{N \times N}$. Due to the perturbed matrix $\widetilde{W}$, the coefficients that minimise equation (8) are also perturbed, $\widetilde{c} = c + \Delta c$, where $\Delta c = (\Delta c_1, \ldots, \Delta c_d)^{T}$ denotes the $d \times 1$ coefficient error vector. We are interested in the sensitivity of the series approach to the error $\Delta W$. More precisely, we are interested in the question: how great is the impact of a small error $\Delta W$ on the error $\Delta c$ in the series coefficients?
The condition number of the Vandermonde matrix $P$ expressed in Lemma 2 equals (Van Loan and Golub, 2012)

$$\kappa(P) = \frac{\sigma_1}{\sigma_d}, \qquad (10)$$

where $\sigma_1$ and $\sigma_d$ denote the largest and smallest singular values of the matrix $P$, respectively. A small error $\Delta W$ results in an error $\Delta c$ in the series coefficients that scales with the square of the condition number $\kappa(P)$ (Van Loan and Golub, 2012):

$$\frac{\lVert \Delta c \rVert_2}{\lVert c \rVert_2} \sim \kappa^2(P)\, \frac{\lVert \Delta W \rVert_F}{\lVert W \rVert_F}. \qquad (11)$$

To be more precise, the left-hand side of equation (11) is upper-bounded by some constant multiplied by the right-hand side of equation (11) (Van Loan and Golub, 2012). Hence, the square of the condition number $\kappa(P)$ determines the sensitivity of the computed series coefficients $c$ to errors $\Delta W$. We emphasise that this sensitivity is an inherent property of the task of computing the coefficients $c_1, \ldots, c_d$ and does not depend on the specific numerical method employed to solve equation (8).
It is known from the literature that Vandermonde matrices can have large condition numbers (Pan, 2016). The squared condition numbers $\kappa^2(P)$ for the structural connectivity matrices $A$ used in Datasets 1 to 4 are shown in the rightmost column of Table 1. Large values of $\kappa^2(P)$ are reported, indicating that small errors in $W$ can have a very large impact on the error in $c$. Likewise, measurement errors $\Delta A$ in the structural connectivity matrix $A$ have a similar effect on the error of $c$. Furthermore, we emphasise that the structural and functional connectivity matrices $A$ and $W$ are digitally stored and processed with finite-precision arithmetic, which can result in large errors in $c$ when $\kappa^2(P)$ is large, even if the matrices $A$ and $W$ themselves had been measured with perfect accuracy.
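The sensitivity argument can be probed numerically. The sketch below (synthetic matrices; the perturbation is applied to the targets $z$, which is how an error $\Delta W$ enters the least-squares problem) computes $\kappa(P)$ and the error amplification for a small random perturbation:

```python
# Sketch (synthetic matrices): condition number of the Vandermonde-type
# matrix P and the resulting amplification of small data errors into
# coefficient errors, as discussed around equations (10)-(11).
import numpy as np

rng = np.random.default_rng(4)
N, d = 12, 6
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)

lam, V = np.linalg.eigh(A)
P = np.vstack([lam ** m / np.linalg.norm(np.linalg.matrix_power(A, m), 2)
               for m in range(1, d + 1)])
kappa = np.linalg.cond(P.T)               # sigma_1 / sigma_d

z = np.diag(V.T @ W @ V)
c, *_ = np.linalg.lstsq(P.T, z, rcond=None)

# Tiny perturbation of the data -> perturbed coefficients.
dz = 1e-8 * rng.standard_normal(N)
c_pert, *_ = np.linalg.lstsq(P.T, z + dz, rcond=None)

amplification = (np.linalg.norm(c_pert - c) / np.linalg.norm(c)) \
                / (np.linalg.norm(dz) / np.linalg.norm(z))
# The worst-case amplification involves kappa and kappa^2 terms; for
# ill-conditioned P it can be enormous.
print(kappa, amplification)
```

The observed amplification for one random perturbation is typically below the worst-case bound, but grows rapidly as $P$ becomes ill-conditioned.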

Comparing the eigenmode and series expansion approach
Similarity between the series and eigenmode approaches would entail that the eigenmode coefficients $s_1, \ldots, s_N$ are approximately equal to the series coefficients $c_m$ weighted by the powers of the eigenvalues of $A$ (the right-hand side of equation (4)). To quantify this similarity for empirical and simulated data, we computed the intraclass correlation coefficient between the diagonal of the left-hand side and the diagonal of the right-hand side of equation (4), i.e., the intraclass correlation between the respective coefficients. The intraclass correlation coefficient can be considered a more rigorous method than conventional Pearson or Spearman correlations for quantifying whether the two methods capture the same underlying link between structural and functional brain networks. In addition, we quantified the similarity of the estimated functional network connectivity between the two approaches in terms of $R^2$, i.e., by applying a linear least-squares regression between the connectivity estimated by the series expansion and eigenmode approaches. To this end, we vectorised the upper triangle of the estimated functional connectivity matrix from both approaches and applied the linear least-squares regression to test the link-wise dependency between approaches (see Table 1 for results). In addition to this, we also compared the fitting errors of both approaches ($\varepsilon_{\mathrm{eigen}}(S)$ and $\varepsilon_{\mathrm{series}}(c)$). Moreover, we analytically show that the optimal fitting error of the eigenmode approach is smaller than or equal to that of the series approach:

Lemma 3. For every structural connectivity matrix $A$ and every functional connectivity matrix $W$, the optimal fit of the eigenmode approach is at least as good as the optimal fit of the series approach. More precisely, it holds that

$$\min_{S} \varepsilon_{\mathrm{eigen}}(S) \le \min_{c} \varepsilon_{\mathrm{series}}(c). \qquad (12)$$

The proof can be found in Appendix C. Furthermore, as shown in Appendix C, equation (12) almost always holds with strict inequality.
In other words, Lemma 3 states that the series approach never explains the experimental data better than the eigenmode approach. Thus, the eigenmode approach is a more accurate model for the relationship between the structural connectivity matrix A and the functional connectivity matrix W.
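Lemma 3 can be spot-checked numerically: the series approach restricts the eigenmode weights to the $d$-parameter family of equation (4), so its optimal error can never be smaller. A sketch on synthetic matrices:

```python
# Numerical check of Lemma 3 on synthetic random matrices: the optimal
# eigenmode fit is never worse than the optimal series fit.
import numpy as np

def optimal_fits(A, W, d):
    lam, V = np.linalg.eigh(A)
    norms = [np.linalg.norm(np.linalg.matrix_power(A, m), 2)
             for m in range(1, d + 1)]
    # Eigenmode approach: optimal S from Lemma 1.
    s = np.diag(V.T @ W @ V)
    eps_eigen = np.linalg.norm(W - V @ np.diag(s) @ V.T, 'fro')
    # Series approach: optimal c from Lemma 2.
    P = np.vstack([lam ** m / norms[m - 1] for m in range(1, d + 1)])
    c, *_ = np.linalg.lstsq(P.T, s, rcond=None)
    W_hat = sum(c[m - 1] * np.linalg.matrix_power(A, m) / norms[m - 1]
                for m in range(1, d + 1))
    eps_series = np.linalg.norm(W - W_hat, 'fro')
    return eps_eigen, eps_series

rng = np.random.default_rng(5)
for _ in range(20):
    N = 9
    A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
    W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
    e_eig, e_ser = optimal_fits(A, W, d=4)
    assert e_eig <= e_ser + 1e-12
print("Lemma 3 holds on all random instances")
```

Intuitively, both errors share the fixed off-diagonal residual of $V^{T} W V$; the series approach adds a further residual from projecting the diagonal onto only $d$ degrees of freedom.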

Application of both approaches to empirical and simulated networks
We used multimodal data (DTI and resting-state fMRI) from four previously published empirical datasets. The first dataset (Dataset 1) consisted of a literature-based structural network for 80 healthy subjects (Gong et al., 2008) combined with a group-averaged functional network based on fMRI data from 21 healthy adults, making use of the automated anatomical labelling (AAL) atlas (cortical nodes, $N = 78$) (Tzourio-Mazoyer et al., 2002). This fMRI dataset has previously been analysed by our group using the series approach (Tewarie et al., 2014). The second and third datasets were retrieved from the University of California Los Angeles (UCLA) multimodal connectivity database. The second dataset (Dataset 2) consisted of a group-averaged structural network and a group-averaged resting-state fMRI network from 381 healthy adults from the Nathan Kline Institute (NKI)/Rockland Sample (Nooner et al., 2012). For this dataset, the Craddock atlas was used ($N = 188$) (Craddock et al., 2012). The third dataset (Dataset 3) consisted of a group-averaged structural and resting-state fMRI network obtained from 79 typically developing children from a UCLA autism dataset. For this dataset, the Power atlas was used ($N = 264$) (Power et al., 2011). We refer to the cited papers for information about the pre-processing pipelines. The fourth dataset (Dataset 4) consisted of a group-averaged structural and resting-state fMRI network ($N = 78$) from 10 healthy subjects from the human connectome data (Van Essen et al., 2013), again making use of the AAL atlas. The processing pipeline for the structural connectome can be found in , and the processing pipeline for the resting-state fMRI network can be found in Appendix D.

[Footnote: Metastability refers to the dwelling tendency of trajectories in phase space without convergence to an attractor; an attractor is a subspace of phase space to which a trajectory converges.]
The functional connectivity matrix $W$ for all datasets was obtained by computing the Pearson correlation coefficient between fMRI time courses. The eigenvalues and eigenvectors of all structural connectivity matrices were obtained using the function eig.m in MATLAB (version R2018b; MathWorks, Massachusetts, USA).
Given the lack of a ground truth for empirical functional connectivity data, we also simulated functional networks using a neural mass model (Wilson and Cowan, 1972). The Wilson-Cowan neural mass model for every node consists of two distinct neuronal populations, an excitatory and an inhibitory population. The dynamics of each node $j$ are characterised in terms of the mean firing rates of these populations ($E_j(t)$: excitatory, $Y_j(t)$: inhibitory). The sum of all inputs to a population is converted using a sigmoid function $g(x) = (1 + \exp(-x))^{-1}$ with threshold $\theta_a$ ($a \in \{e, y\}$). The mean firing rates evolve according to the following dynamical system:

$$\tau \frac{dE_j}{dt} = -E_j + g\Big(c_{ee} E_j - c_{ey} Y_j + \eta \sum_{i \ne j} A_{ji}\, E_i(t - \tau_{ji}) + P_{\mathrm{ext}} - \theta_e\Big) + \xi_j(t),$$
$$\tau \frac{dY_j}{dt} = -Y_j + g\big(c_{ye} E_j - c_{yy} Y_j - \theta_y\big) + \xi_j(t). \qquad (13)$$

The parameters $c_{ab}$ (with $a, b \in \{e, y\}$) correspond to the strengths of the connections between the populations, $\tau$ (in s) is a relaxation time constant, and $\xi$ is Gaussian noise with zero mean and unit variance. The incoming firing rates from distant nodes are tuned by the global coupling strength parameter $\eta$, and incoming firing rates $E_i(t)$ are delayed by a Euclidean-distance-dependent delay $\tau_{ji}$. Nodes undergo saddle-node and Hopf bifurcations when the external input parameter $P_{\mathrm{ext}}$ is tuned (see Fig. 5B in ). The working point of the model was near the Hopf bifurcation, since this regime seemed to be consistent with empirical data (Deco et al., 2017a). Time series of $E_j(t)$ were used as the model output, and model parameters were identical to those in . Equation (13) was numerically solved using a 4th-order Runge-Kutta scheme with a sufficiently small time step ($\Delta t = 1 \times 10^{-4}$ s) (Lemaréchal et al., 2018). The structural network from Dataset 1 was used as input for $A$ in the simulations.

Fig. 1 shows the empirical functional connectivity matrix and the estimated functional connectivity matrices using the series expansion and eigenmode approaches for all four empirical datasets and the simulations. Estimations are based on the analytical expressions described in equations (6) and (9).
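A stripped-down simulation of this kind can be sketched as follows (all parameter values below are hypothetical, delays are neglected, and Euler-Maruyama integration replaces the paper's 4th-order Runge-Kutta scheme for brevity):

```python
# Minimal Wilson-Cowan network sketch (assumed parameters; not the paper's
# implementation): simulate excitatory/inhibitory rates and derive a
# "functional connectivity" matrix from correlations of the E traces.
import numpy as np

rng = np.random.default_rng(5)
N = 4
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)

def g(x):                                   # sigmoid transfer function
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical parameter values
c_ee, c_ey, c_ye, c_yy = 16.0, 12.0, 15.0, 3.0
theta_e, theta_y = 4.0, 3.7
tau, eta, P_ext, sigma = 0.01, 0.5, 1.0, 0.01
dt, steps = 1e-4, 2000

E = rng.random(N); Y = rng.random(N)
trace = np.empty((steps, N))
for t in range(steps):
    inp = eta * A @ E                       # network input (delays neglected)
    dE = (-E + g(c_ee * E - c_ey * Y + inp + P_ext - theta_e)) / tau
    dY = (-Y + g(c_ye * E - c_yy * Y - theta_y)) / tau
    E = E + dt * dE + sigma * np.sqrt(dt) * rng.standard_normal(N)
    Y = Y + dt * dY + sigma * np.sqrt(dt) * rng.standard_normal(N)
    trace[t] = E

W_sim = np.corrcoef(trace.T)                # simulated functional connectivity
print(W_sim.shape)
```

In the paper's setup the resulting correlation matrix of the excitatory time series plays the role of $W$ for the simulated data.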
For every dataset, Fig. 1 also shows a scatter plot of the magnitude of the eigenmode coefficients $s_i$ (the diagonal of $S$), i.e., the weights of the different eigenvectors of the structural network, versus the aggregated coefficients of the series expansion approach, i.e., the sum of the powers of the eigenvalues of the structural network weighted by $c_m / \lVert A^m \rVert_2$ (the right-hand side of equation (4)). The number of points in each scatter plot corresponds to the number of eigenvalues of the structural network, which is equal to $N$. Intraclass correlation coefficients are shown in each scatter plot. For all four independent datasets, with different network sizes $N$, there were strong and significant correlations $R$ between the coefficients from the eigenmode and series approaches (see also Table 1). This strong correlation between the two approaches was also found for the Wilson-Cowan based neural mass simulations. The high similarity between the two approaches was also reflected in the observation that the estimated connectivity matrices were highly correlated (see Table 1, column 7, for $R^2$). Note that $\varepsilon_{\mathrm{eigen}}(S) < \varepsilon_{\mathrm{series}}(c)$ in all cases, in agreement with Lemma 3.

Extension of theoretical and numerical comparisons using the eigenmodes of the graph Laplacian
So far, we have analysed the eigenmodes of the structural connectivity matrix $A$ in relation to the series expansion approach. In contrast, other studies have focused on the eigenmodes of the graph Laplacian (Atasoy et al., 2016;Abdelnour et al., 2018). Analysis of the graph Laplacian is attractive due to its well-behaved properties, such as the boundedness of its eigenvalues (Van Mieghem, 2010). The graph Laplacian is defined as $Q_A = K_A - A$, where $K_A = \mathrm{diag}(Ax)$ refers to the degree matrix, with $x$ the all-one vector. We further normalised the graph Laplacian to allow for comparison with (Abdelnour et al., 2018), and denote the normalised Laplacian of $A$ by $Q_{As}$. In the same way, we computed the normalised graph Laplacian $Q_{Ws}$ of the functional connectivity matrix $W$. We denote the eigenvalues of the Laplacians $Q_{As}$ and $Q_{Ws}$ by $\mu_1, \ldots, \mu_N$ and $\gamma_1, \ldots, \gamma_N$, respectively, and decompose the Laplacians as $Q_{As} = U \mathrm{diag}(\mu_1, \ldots, \mu_N) U^{T}$ and $Q_{Ws} = Z \mathrm{diag}(\gamma_1, \ldots, \gamma_N) Z^{T}$. Here, the columns of the matrices $U$ and $Z$ are the eigenvectors of the respective Laplacians. We then estimated the coefficients for the eigenmode approach in the same way as described above. Furthermore, below we also provide a comparison with the approach of Abdelnour and colleagues (Abdelnour et al., 2018), which is based on an exponential relationship between the eigenvalues of the structural and functional connectivity matrices.
First, we determine the fitting errors for the Taylor expansion and eigenmode approaches when using the graph Laplacian. For the Taylor expansion approach, since $A = K_A - Q_A$, we can rewrite equation (2) as

$$W \approx \sum_{m=1}^{d} \frac{c_m}{\lVert A^m \rVert_2} (K_A - Q_A)^m. \qquad (14)$$

Since equation (14) is equivalent to equation (2), the coefficients $c_m$ and the fitting error $\varepsilon_{\mathrm{series}}(c)$ remain unaltered.
Similarly to equation (1), the claim of the eigenmode approach is that the graph Laplacian $Q_{Ws}$ of the functional connectivity matrix $W$ can be approximated by a linear combination of the eigenvectors $U$ of the graph Laplacian of the structural network:

$$Q_{Ws} \approx U P U^{T}, \qquad (15)$$

where the diagonal matrix $P$ contains the weighting coefficients. In the same way as for the eigenmode approach for $A$, we define the fitting error

$$\varepsilon_{Q}(P) = \lVert Q_{Ws} - U P U^{T} \rVert_F. \qquad (16)$$

By the same reasoning as for $\varepsilon_{\mathrm{eigen}}(S)$ (i.e., Lemma 1), the diagonal matrix that minimises $\varepsilon_{Q}(P)$ has entries

$$p_i = u_i^{T} Q_{Ws} u_i, \qquad i = 1, \ldots, N. \qquad (17)$$

Since by definition $Q_W = K_W - W$, the estimated functional connectivity matrix can subsequently be obtained (omitting the Laplacian normalisation for notational simplicity) by evaluating

$$\widehat{W} = K_W - U P U^{T}, \qquad (18)$$

where $K_W$ refers to the diagonal matrix with the degrees of $W$ on the diagonal.
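A minimal sketch of the Laplacian eigenmode fit on synthetic matrices (the symmetric normalisation $K^{-1/2}(K - M)K^{-1/2}$ is assumed here as one common choice; the paper's exact normalisation may differ):

```python
# Sketch: Laplacian eigenmode fit (equations (15)-(17)) on synthetic data,
# assuming a symmetric normalisation of the graph Laplacian.
import numpy as np

rng = np.random.default_rng(6)
N = 8
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)

def normalised_laplacian(M):
    """Symmetrically normalised Laplacian K^{-1/2} (K - M) K^{-1/2}."""
    k = M.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(k))
    return d_inv_sqrt @ (np.diag(k) - M) @ d_inv_sqrt

Q_As = normalised_laplacian(A)
Q_Ws = normalised_laplacian(W)

mu, U = np.linalg.eigh(Q_As)        # structural Laplacian eigenmodes
p = np.diag(U.T @ Q_Ws @ U)         # optimal weights, Lemma-1 style
eps_Q = np.linalg.norm(Q_Ws - U @ np.diag(p) @ U.T, 'fro')
print(round(eps_Q, 4))
```

As with the adjacency-based fit, the remaining error is the off-diagonal part of $U^{T} Q_{Ws} U$.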
If there is a link between the series expansion and the Laplacian eigenmode approach, we can equate equations (14) and (18) and solve for P,

P = U^T (K_W − Σ_{m=1}^{d} c_m (K_A − Q_A)^m) U.  (19)

Unlike equation (4), the eigenvectors U cannot be eliminated on the right-hand side of equation (19). In addition to equation (18), previous work has expressed W in terms of an exponential relationship between the eigenvalues of the structural graph Laplacian and the functional connectivity matrix (Abdelnour et al., 2018),

Ŵ = a Σ_{i=1}^{N} e^{−α λ_i} u_i u_i^T + b I,  (20)

where u_i is the ith column of U and λ_i the corresponding eigenvalue of Q_A. The coefficients a, α and b are found using a non-linear minimisation method (Abdelnour et al., 2018). The fitting error for this method equals ε_exp(a, α, b) = ‖W − Ŵ‖_F.

Fig. 1. Series expansion versus eigenmode approach (weighted adjacency matrix): Empirical fMRI connectivity for all four independent datasets is illustrated (A-D), together with results from neural mass simulations (E). The estimated functional connectivity matrices for the series expansion and eigenmode approaches are also displayed, alongside the error, expressed as ε_eigen(S) or ε_series(c), and the Pearson correlation R between predicted and actual connectivity matrices. The scatter plot on the right shows the estimated coefficients for the eigenmodes, s_i, against the diagonal elements of the right-hand side of equation (4), here called aggregated series coefficients. A strong and significant correlation was found for all datasets.

Fig. 2 shows the actual and estimated functional connectivity matrices using the graph Laplacian eigenmode approaches for all four empirical datasets and the simulations. The second column shows the estimated functional connectivity matrices based on equation (20), while the third column shows the estimated functional connectivity matrices based on equation (18). Both graph Laplacian approaches are able to predict the actual functional connectivity matrices. Correlations with the actual functional connectivity based on predictions from equation (18) (i.e., graph Laplacian) appear to be the strongest (compare the 3rd column in Fig. 2 with the 3rd column in Fig. 1 and the 2nd column in Fig. 2). The diagonal elements of the left- and right-hand sides of equation (19) are depicted in the scatter plots in Fig. 2 (i.e., the potential link between the series expansion approach and the graph Laplacian eigenmode approach). For both the empirical data and the neural mass simulations, these scatter plots show strong correlations between the series expansion and graph Laplacian eigenmode approaches.
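The non-linear fit of a, α and b can be sketched with a simple grid search over α: for a fixed α, the coefficients a and b enter the model linearly and are obtained in closed form by least squares. This is a stand-in for the non-linear minimisation used in the cited work; the function and parameter names are our own.

```python
import numpy as np

def fit_exponential_laplacian(A, W, alphas=np.linspace(0.01, 5.0, 200)):
    """Grid-search sketch of the exponential Laplacian model
    W_hat = a * sum_i exp(-alpha * lam_i) u_i u_i^T + b * I.

    For each candidate alpha, a and b are linear coefficients, so they
    are solved by linear least squares; alpha is chosen on a grid.
    """
    N = A.shape[0]
    Q = np.diag(A.sum(axis=1)) - A                    # structural graph Laplacian
    lam, U = np.linalg.eigh(Q)
    best = None
    for alpha in alphas:
        E = U @ np.diag(np.exp(-alpha * lam)) @ U.T   # exp(-alpha * Q)
        X = np.column_stack([E.flatten(), np.eye(N).flatten()])
        (a, b), *_ = np.linalg.lstsq(X, W.flatten(), rcond=None)
        err = np.linalg.norm(W - a * E - b * np.eye(N))
        if best is None or err < best[-1]:
            best = (a, alpha, b, err)
    return best  # (a, alpha, b, fitting error)
```

Refining the grid around the best α, or handing the grid minimum to a gradient-based optimiser, would mimic the non-linear minimisation more closely.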

Discussion
Recent years have seen a rise in studies that make use of the idea that functional networks can be partly understood in terms of the eigenmodes of the structural network (Atasoy et al., 2016; Robinson et al., 2016; Wang et al., 2017; Abdelnour et al., 2018). Another existing theory is that functional networks emerge from the weighted sum of all possible walks in the structural network, an idea that was inspired by the use of path integrals in quantum mechanics (Robinson, 2012). Here, we have shown by simple linear algebra that the weighted sum of all possible walks corresponds to different weightings of the eigenmodes of the structural network, i.e., the two approaches are theoretically strongly related. This holds for eigenmodes based on the weighted adjacency matrix as well as for eigenmodes based on the graph Laplacian. We also derived expressions for the weighting coefficients used by both approaches. The theoretical correspondence between the two approaches was verified in four independent empirical datasets and in neural mass simulations. As expected, both approaches identified very similar mappings between structural and functional networks, encouraging the search for unifying theories of structural-functional network relationships in the future. However, our analytical results further showed that the error made by the eigenmode approach is always smaller than or equal to the error of the series expansion approach. In addition, due to the large condition number of the Vandermonde matrix, measurement and estimation errors in, as well as finite-precision storage of and computation with, the functional and structural connectivity matrices can lead to large errors for the series approach. Indeed, for both experimental and simulated data the fit of the eigenmode approach was better than that of the series approach. Taken together, these findings advocate the use of the eigenmode approach.
A few aspects of the results warrant further discussion. First, for both approaches, the fitting error was larger for the larger networks (datasets 2 and 3) than for the smaller networks. This dimension dependence could be related to the fact that both publicly available fMRI networks were provided after a threshold had been applied, resulting in many zeros in the matrix, which are difficult to estimate. In addition, thresholding could have altered the eigenbasis of the matrix.
Note that the difference in errors for the series approach between larger and smaller networks could not be explained by a difference in truncation of the series, as the diameter of all networks was approximately equal (five or six). If we assume that structural networks have small-world characteristics (Yan et al., 2010), the similarity in diameter across the different structural networks could be explained by the fact that the path length between two randomly chosen nodes in such networks scales with log(N). Second, the simulated data clearly illustrated that in a system with ground-truth structural connectivity, the emerging functional connectivity can be adequately estimated using the series expansion and eigenmode approaches, with lower errors than when the approaches are applied to empirical fMRI data. Third, eigenmodes of the weighted adjacency matrix and eigenmodes of the graph Laplacian were both strongly related to the series expansion approach. The goodness of fit with actual functional connectivity data was slightly higher for the graph Laplacian approach. However, since the analysis was performed at the group level, we could not statistically compare the two distributions of goodness of fit (graph Laplacian versus weighted adjacency matrix) and therefore cannot conclude that predictions based on the eigenmodes of the graph Laplacian outperform predictions based on the eigenmodes of the weighted adjacency matrix.
A few aspects of the methodology warrant further discussion. First, we gave explicit expressions for the coefficients used by the eigenmode approach and by the series expansion approach. These coefficients can be computed efficiently, which avoids the use of fitting algorithms such as the Matlab function nlinfit used in previous work. Second, in our initial work, we argued that the functional network may be a function of the structural network: W = f(A). If this function is analytic, it can be approximated by a Taylor series. A Taylor series in the topology domain (at the level of A) would also translate to the spectral domain, where the same function would map the eigenvalues of the structural network to those of the functional network. This correspondence would also assume identical eigenvectors for the structural and functional networks. Our results showed that the commutator of A and W was close, but not equal, to zero; hence the estimated structural and functional networks were not simultaneously diagonalizable and therefore did not share exactly the same eigenvectors. However, as the normalised commutator was close to zero, the functional and structural connectivity matrices were almost jointly diagonalizable and had similar eigenvectors (Glashoff and Bronstein, 2013). The most important reasons for this mismatch between the eigenvectors of structural and functional networks may be 1) systematic and random errors in the estimation of structural and functional networks (Sotiropoulos and Zalesky, 2019; Noble et al., 2019); 2) the function f being dependent on other state variables such as local or nodal dynamics (Harush and Barzel, 2017; Deco et al., 2014; Forrester et al., 2019).
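The commutator check described above is a one-line computation; a minimal sketch, with a hypothetical function name, normalising the commutator by the Frobenius norms of the two matrices:

```python
import numpy as np

def normalised_commutator(A, W):
    """Normalised commutator ||AW - WA||_F / (||A||_F * ||W||_F).

    A value of zero means that the symmetric matrices A and W commute and
    are therefore simultaneously diagonalizable, i.e. they share an
    eigenvector basis; small values indicate approximately shared
    eigenvectors.
    """
    C = A @ W - W @ A
    return np.linalg.norm(C) / (np.linalg.norm(A) * np.linalg.norm(W))
```

Any polynomial in A (for instance a truncated series expansion) commutes with A exactly, whereas an unrelated matrix generally does not.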
However, given the significant contribution of (weighted) powers of the structural network to explaining functional connectivity matrices in comparison with surrogate data, and given the plausible explanation that indirect routes on the structural network could indeed contribute to functional connectivity (Honey et al., 2009), we may not have to refute the notion of an analytic function between structural and functional networks, W = f(A), with A being the only argument. Instead, we may have to relax the statement W = f(A) and restate that possible walks on the structural network contribute to the explanation of the functional network. Another remark with respect to these approaches is that they assume that the networks are symmetric. However, especially (directed) functional connectivity matrices can be asymmetric. In addition, there is evidence for asymmetries in structural networks (Oh et al., 2014), and further work will be required to analyse the use of mapping methods for asymmetric networks. Lastly, we would like to make the reader aware that we have presented a correspondence with the series expansion approach using eigenmodes of the structural connectivity matrix and of the Laplacian of the structural connectivity matrix. This use of eigenmodes should not be confused with other uses of eigenmodes, which can be estimated from the cortical surface (Gabay et al., 2017; Robinson et al., 2016), or for the Laplacian of the cortical surface and structural connectivity combined (Atasoy et al., 2016).

Fig. 2. Graph Laplacian eigenmode approaches: The estimated functional connectivity matrices based on the exponential relationship (equation (20)) and based on minimising the fitting error (equation (16)) are displayed, alongside the errors, expressed as ε_exp(a, α, b) or ε_Q(P), and the Pearson correlation R between predicted and actual connectivity matrices. The scatter plot on the right shows the estimated coefficients for the eigenmodes, p_i, against the diagonal elements of the right-hand side of equation (19), here called aggregated series coefficients. A strong and significant correlation was found for all datasets.
Having said this, we have demonstrated that the series expansion and eigenmode approaches identify very similar mappings between structural and functional networks, and we have verified this for the first time in empirical data, encouraging the search for a unification of theories of structural-functional network relationships. However, given the numerical hurdle of the sensitivity of the series expansion approach to small errors in connectivity data, the eigenmode approach should be preferred over the series approach.
CRediT authorship contribution statement

A. Proof of Lemma 1

We derive the matrix S that minimises the error ε_eigen(S). The Frobenius norm of an N × N matrix M is defined as (Van Mieghem, 2010)

‖M‖_F = (Σ_{i=1}^{N} Σ_{j=1}^{N} M_{ij}^2)^{1/2}.  (A1)

For a symmetric matrix M, the Frobenius norm can be expressed in terms of the eigenvalues λ_i(M) of the matrix M as

‖M‖_F = (Σ_{i=1}^{N} λ_i^2(M))^{1/2}.

Thus, the Frobenius norm of a symmetric matrix M is completely determined by the eigenvalues of M. For any orthogonal matrix B and any symmetric matrix M, the matrix M̃ = B M B^T is symmetric and has the same set of eigenvalues as the matrix M. Hence, the Frobenius norms of the matrices M and M̃ are equal, i.e.,

‖B M B^T‖_F = ‖M‖_F

for any symmetric matrix M and any orthogonal matrix B.
In particular, by identifying M = W − V S V^T and B = V^T, the definition of the error ε_eigen(S) by equation (5) is equivalent to

ε_eigen(S) = ‖V^T (W − V S V^T) V‖_F = ‖V^T W V − S‖_F,

where the last equality follows from the orthogonality of the matrix V, hence V^T V = I. Minimising the non-negative error ε_eigen(S) is equivalent to minimising its square ε_eigen^2(S). With the definition of the Frobenius norm, equation (A1), we obtain

ε_eigen^2(S) = Σ_{i=1}^{N} ((V^T W V)_{ii} − s_i)^2 + Σ_{i≠j} ((V^T W V)_{ij})^2,  (A2)

where the last equality follows from the fact that the matrix S is diagonal. The second addend in equation (A2) does not depend on the matrix S. Hence, minimising the error ε_eigen(S) is equivalent to minimising

Σ_{i=1}^{N} ((V^T W V)_{ii} − s_i)^2,

which is minimal if s_i = v_i^T W v_i for all nodes i = 1, …, N. Note that (V^T W V)_{ii} and v_i^T W v_i are equal, which follows from writing out the matrix product V^T W V and identifying the eigenvectors v_i as the columns of the matrix V = (v_1, …, v_N).
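The minimiser derived above is easy to check numerically: the diagonal of V^T W V never yields a larger error than any other diagonal weighting. A minimal sketch, with function names of our own choosing:

```python
import numpy as np

def eigenmode_error(W, V, s):
    """eps_eigen(S) = ||W - V diag(s) V^T||_F for a given diagonal weighting s."""
    return np.linalg.norm(W - V @ np.diag(s) @ V.T)

def optimal_s(W, V):
    """Optimal diagonal entries s_i = v_i^T W v_i = (V^T W V)_ii (Lemma 1)."""
    return np.diag(V.T @ W @ V).copy()
```

Comparing the error at the optimum against random diagonal weightings confirms the global-minimum property of the closed-form solution.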
Thus, the matrix S that results in the best fit of the eigenmode approach equals

S = diag(v_1^T W v_1, …, v_N^T W v_N).  (A3)

B. Proof of Lemma 2

We derive the coefficients c_m that minimise the error ε_series(c). The coefficients c_m can be obtained by minimising the difference between the left-hand side and the right-hand side of equation (3). Since the matrix A is symmetric, it holds that

A^m = V Λ^m V^T.

A similar calculation as in Appendix A yields that minimising the error is equivalent to minimising

Σ_{i=1}^{N} (v_i^T W v_i − Σ_{m=1}^{d} c_m λ_i^m)^2.  (A6)

Rewriting equation (A6) in terms of the vector z and the matrix P completes the proof of Lemma 2.
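The least-squares problem (A6) involves a Vandermonde-type matrix in the structural eigenvalues, whose condition number grows rapidly with the truncation order; this conditioning underlies the sensitivity of the series coefficients noted in the Discussion. A small sketch to compute it (function name is ours):

```python
import numpy as np

def vandermonde_condition(lam, d):
    """Condition number of the N x d Vandermonde-type matrix [lam_i^m],
    m = 1..d, built from the structural eigenvalues lam.

    Large condition numbers mean that small perturbations of the targets
    v_i^T W v_i translate into large errors in the series coefficients c_m.
    """
    M = np.column_stack([lam ** m for m in range(1, d + 1)])
    return np.linalg.cond(M)
```

Even for modest truncation orders, the condition number typically grows by several orders of magnitude as d increases.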

C. Proof of Lemma 3
We prove Lemma 3 in two steps. As the first step, we rewrite the error ε_series(c) of the series approach. From equations (A4) and (A5), we obtain that the error ε_series(c) of the series approach can be expressed, for any coefficients c, as

ε_series(c) = ‖V^T W V − Y‖_F = ε_eigen(Y),

where Y is the diagonal matrix with entries

y_i = Σ_{m=1}^{d} c_m λ_i^m, i = 1, …, N,  (A8)

and where the last equality follows from equation (A7). Thus, for any coefficients c, the choice (A8) for the eigenmode coefficients s_1, …, s_N results in ε_eigen(Y) = ε_series(c). Hence, we obtain that

min_S ε_eigen(S) ≤ ε_eigen(Y) = min_c ε_series(c),  (A9)

which proves Lemma 3. Please note that the argument used is analogous to the classical linear algebra idea used to compute the remainder term (and the associated error) of a Taylor polynomial of a matrix function (Deadman and Relton, 2016). We emphasise that (A9) almost always holds with strict inequality, i.e.,

min_S ε_eigen(S) < min_c ε_series(c).  (A10)

To arrive at (A10), we stack (A8) and obtain a linear system for the coefficients c_1, …, c_d as

Σ_{m=1}^{d} c_m λ_i^m = s_i, i = 1, …, N.  (A11)

With Lemma 1, we obtain that (A9) holds with equality if and only if

Σ_{m=1}^{d} c_m λ_i^m = v_i^T W v_i, i = 1, …, N.  (A12)

The linear system (A12) is overdetermined since it has N equations but only d unknowns c_1, …, c_d. More specifically, the linear system (A12) can be solved for the coefficients c_1, …, c_d only if the N × 1 vector

(v_1^T W v_1, …, v_N^T W v_N)^T  (A13)

is an element of the image (or range) of the N × d matrix with entries λ_i^m, for i = 1, …, N and m = 1, …, d.  (A14)

Thus, (A9) holds with equality only if the vector (A13) is in the image of (A14). Since the image of the matrix (A14) is a subspace of R^N with dimension not greater than d, this is a highly restrictive condition. In particular, if we consider that the N × N matrix W follows some random distribution, then the vector (A13) is not in the image of the matrix (A14) with probability 1 for the vast majority of distributions of the matrix W.
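Inequality (A9) can be verified numerically for random symmetric matrices. The sketch below (function name is ours) computes the minimal eigenmode error via Lemma 1 and the minimal series error via least squares on the stacked system, and illustrates that the former never exceeds the latter.

```python
import numpy as np

def compare_errors(A, W, d=4):
    """Compare the minimal eigenmode error min_S eps_eigen(S) with the
    minimal series error min_c eps_series(c) for symmetric A and W."""
    lam, V = np.linalg.eigh(A)
    # Optimal eigenmode coefficients (Lemma 1): s_i = v_i^T W v_i.
    s = np.einsum('ij,jk,ki->i', V.T, W, V)
    err_eigen = np.linalg.norm(W - V @ np.diag(s) @ V.T)
    # Optimal series coefficients: least squares on the stacked system,
    # with the Vandermonde-type columns lam^1, ..., lam^d.
    M = np.column_stack([lam ** m for m in range(1, d + 1)])
    c, *_ = np.linalg.lstsq(M, s, rcond=None)
    W_hat = sum(cm * np.linalg.matrix_power(A, m + 1) for m, cm in enumerate(c))
    err_series = np.linalg.norm(W - W_hat)
    return err_eigen, err_series
```

Since the off-diagonal part of V^T W V is untouched by either fit, the series error equals the eigenmode error plus the residual of the overdetermined system, which is almost surely positive for N > d.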
D. Processing pipeline fMRI data for dataset 4

The HCP 100 unrelated subjects resting-state fMRI dataset (REST1) was included for this project; it had already been pre-processed using the optimized HCP minimal processing pipeline before downloading, including normalization, motion correction and intensity normalization. Subsequently, the data were motion-corrected again using ICA-AROMA (v0.4-beta 2017, Nijmegen, the Netherlands), which identifies which ICA-based components are strongly correlated with the available motion parameters and removes these components from the data. All subsequent processing was performed using FSL 5.09 (https://fsl.fmrib.ox.ac.uk/fsl/FMRIB, Oxford, UK). 3D T1-weighted data from the dataset were processed using SIENAX to create grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) masks, which were registered to the functional images using inverted boundary-based registration (BBR) parameters. The WM and CSF masks were used to regress out the average signals within these masks from the ICA-AROMA-processed data. The automated anatomical labelling (AAL) atlas was registered to the T1-weighted images using inverted FNIRT parameters (Tzourio-Mazoyer et al., 2002), after which the SIENAX-derived GM masks were used to mask the cortical atlas in T1 space. Subsequently, this modified atlas was transferred to the fMRI images, again using inverted BBR parameters. Finally, mean time series were calculated for each region within the atlas.