
NeuroImage

Volume 25, Issue 2, 1 April 2005, Pages 383-394

A comparison of random field theory and permutation methods for the statistical analysis of MEG data

https://doi.org/10.1016/j.neuroimage.2004.09.040

Abstract

We describe the use of random field and permutation methods to detect activation in cortically constrained maps of current density computed from MEG data. The methods are applicable to any inverse imaging method that maps event-related MEG to a coregistered cortical surface. These approaches also extend directly to images computed from event-related EEG data. We determine statistical thresholds that control the familywise error rate (FWER) across space or across both space and time. Both random field and permutation methods use the distribution of the maximum statistic under the null hypothesis to find FWER thresholds. The former make assumptions about the distribution and smoothness of the data and use approximate analytical solutions; the latter resample the data and rely on empirical distributions. Both methods account for spatial and temporal correlation in the cortical maps. Unlike previous nonparametric work in neuroimaging, we address the problem of nonuniform specificity that can arise without a Gaussianity assumption. We compare and evaluate the methods on simulated data and on experimental data from a somatosensory evoked response study. We find that the random field methods are conservative with or without smoothing, though with a 5-vertex FWHM smoothness they are close to exact. Our permutation methods demonstrated exact specificity in simulation studies. In real data, the permutation method was not as sensitive as the RF method, although this could be due to violations of the random field theory assumptions.

Introduction

Magnetoencephalography (MEG) is used to image electrical activity in the brain. Clusters of thousands of synchronously activated pyramidal cortical neurons are believed to be the main generators of MEG signals. In particular, the currents associated with their large dendritic trunks, which are locally oriented in parallel and perpendicular to the cortical surface, are the primary source of the neuromagnetic fields outside the head (Hämäläinen et al., 1993). Imaging approaches to the MEG inverse problem exploit this concept by restricting the reconstruction to elemental sources (dipoles) oriented normally to the cortical surface (Dale and Sereno, 1993). Consequently, a commonly used approach extracts a tessellated representation of the cerebral cortex from a coregistered MR image and solves the inverse problem for a current dipole located at each vertex of the tessellated surface. Since the positions and orientations of the dipoles are fixed, image reconstruction is a linear problem and can be solved using standard techniques (Baillet et al., 2001, Hämäläinen et al., 1993, Katila and Karp, 1983, Phillips et al., 1997). However, the highly convoluted nature of the human cortex requires the use of many thousands of dipoles for an accurate representation of the cortical surface. The inverse problem becomes hugely underdetermined and the resulting current density maps (CDMs) are of low resolution; interpretation is further confounded by the presence of additive noise exhibiting a highly nonuniform spatial correlation.
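
To make the linearity of the reconstruction concrete, the following minimal sketch computes a generic Tikhonov-regularized minimum-norm inverse operator for fixed, cortically constrained dipoles. The lead field L, the regularization parameter lam, and all array sizes are hypothetical placeholders; this illustrates a standard linear inverse, not necessarily the specific operator used in this study.

```python
import numpy as np

def minimum_norm_inverse(L, lam):
    """Generic Tikhonov-regularized minimum-norm inverse operator.

    L   : (n_sensors, n_vertices) lead field for cortically constrained,
          normally oriented dipoles (one column per surface vertex).
    lam : regularization parameter (placeholder value, not from the paper).

    Returns W (n_vertices, n_sensors) such that the current density map
    is J = W @ M for sensor data M.
    """
    n_sensors = L.shape[0]
    G = L @ L.T + lam * np.eye(n_sensors)   # regularized sensor-space Gram matrix
    return L.T @ np.linalg.solve(G, np.eye(n_sensors))

# Hypothetical sizes: 200 MEG sensors, 10000 cortical vertices, 300 time samples.
rng = np.random.default_rng(0)
L = rng.standard_normal((200, 10000))
M = rng.standard_normal((200, 300))         # sensor time series
W = minimum_norm_inverse(L, lam=1e-2)
J = W @ M                                   # current density map: vertices x time
```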

As with fMRI images, objective assessment of CDMs requires a principled approach to identifying regions of activation. The analysis of CDMs involves testing thousands of hypotheses (one per surface element) for statistically significant experimental effects. This raises the possibility of large numbers of false positives arising simply from multiple hypothesis testing, so the number of false positives must be controlled over all tests jointly. Many false-positive measures have been proposed in this context, including the familywise error rate, the expected false discovery rate, the per-comparison error rate, and the per-family error rate (Nichols and Hayasaka, 2003). The standard approach, and the one investigated in this paper, is to control the familywise error rate (FWER), i.e., the chance of one or more false positives under the null hypothesis.

The simplest approach to controlling the FWER is the Bonferroni correction method (Hochberg and Tamhane, 1987, Nichols and Hayasaka, 2003). This method produces conservative thresholds unless the tests are independent, a case that is rarely true in neuroimaging experiments and certainly not for the smooth images reconstructed from MEG data. Other methods that consider the spatial dependence of the data make inferences based on the global maximum distribution. The FWER is directly related to the maximum statistic: one or more statistics $T_i$ will exceed the threshold $u$ under the null hypothesis $H_0$ only if the maximum exceeds the threshold,
$$
P(\mathrm{FWER}) = P\Bigl(\bigcup_i \{T_i > u\} \,\Big|\, H_0\Bigr) = P\Bigl(\max_i T_i > u \,\Big|\, H_0\Bigr) = 1 - F_{\max T \mid H_0}(u) = 1 - (1 - \alpha) = \alpha.
$$

Therefore, we can control the FWER at level α if we choose the threshold u to be the 100(1 − α)th percentile of the maximum distribution.
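
To illustrate, the following sketch draws maximum statistics under a toy i.i.d. Gaussian null by Monte Carlo and takes the 100(1 − α)th percentile as the FWER threshold; the number of tests and the null model are illustrative assumptions only, not the data model used later in the paper.

```python
import numpy as np

alpha = 0.05
n_tests = 5000           # hypothetical number of cortical vertices
n_null = 2000            # Monte Carlo realizations of the null

rng = np.random.default_rng(1)
# Each row is one null image; i.i.d. standard normal for illustration only.
null_images = rng.standard_normal((n_null, n_tests))
max_stats = null_images.max(axis=1)

# FWER threshold: 100*(1 - alpha)th percentile of the max-statistic distribution.
u = np.quantile(max_stats, 1 - alpha)

# Check: the fraction of null images with any statistic above u is ~alpha.
print(u, (max_stats > u).mean())
```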

Random Field (RF) theory methods approximate the upper tail of the maximum distribution $F_{\max}$ using the expected value of the Euler characteristic of the thresholded image (Worsley et al., 1996). They are implemented in various software packages (SPM, http://www.fil.ion.ucl.ac.uk; VoxBo, http://www.voxbo.org; and FSL, http://www.fmrib.ox.ac.uk/fsl; among others) and are typically used in PET and fMRI studies. However, RF theory relies on several assumptions: the image has the same parametric distribution at each spatial location, the point spread function has two derivatives at the origin, the data are sufficiently smooth to justify the application of continuous RF theory, and the threshold is sufficiently high for the asymptotic results to be accurate.
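
As a reference point for how such thresholds are obtained, the sketch below evaluates the standard expected Euler characteristic approximation for a 2D Gaussian random field (EC densities as in Worsley et al., 1996) and numerically solves E[EC(u)] = α for the threshold u. The resel counts here are made-up numbers; an actual analysis of cortical t maps would use t-field EC densities and resels estimated from the smoothness of the data.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def expected_ec_gaussian_2d(u, resels):
    """Expected Euler characteristic of a 2D Gaussian field thresholded at u.

    resels = (R0, R1, R2): 0-, 1-, and 2-dimensional resel counts.
    EC densities follow the standard Gaussian-field expressions.
    """
    R0, R1, R2 = resels
    rho0 = norm.sf(u)
    rho1 = np.sqrt(4 * np.log(2)) / (2 * np.pi) * np.exp(-u**2 / 2)
    rho2 = 4 * np.log(2) / (2 * np.pi)**1.5 * u * np.exp(-u**2 / 2)
    return R0 * rho0 + R1 * rho1 + R2 * rho2

alpha = 0.05
resels = (1.0, 30.0, 200.0)   # hypothetical resel counts for a smoothed surface map

# Solve E[EC(u)] = alpha for the RF-theory FWER threshold.
u_rf = brentq(lambda u: expected_ec_gaussian_2d(u, resels) - alpha, 1.0, 10.0)
print(u_rf)
```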

Resampling methods are a different approach to controlling the FWER that exploit the information contained in the data to estimate the empirical distribution of the maximum statistic. They do not assume parametric distributions, they adapt to underlying correlation patterns in the data, and are now computationally feasible. The two main categories are bootstrap-based, which allow for a general modeling framework, and permutation-based, which require some knowledge of exchangeability conditions under the null hypothesis. Here, we only consider permutation methods since they are exact, that is, they give precise control of the FWER, while bootstrap methods are only asymptotically exact. Furthermore, the permutation approach relies on a less restrictive exchangeability condition than the requirement in the bootstrap that samples are independent and identically distributed.
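
As a concrete instance of the permutation idea, the sketch below builds an empirical maximum-statistic distribution by randomly flipping the signs of per-subject difference maps, a common exchangeability argument when the null distribution is symmetric about zero. This is an illustrative scheme with hypothetical data sizes, not necessarily the exact permutation strategy used in this paper.

```python
import numpy as np

def max_t(diff_maps):
    """Maximum one-sample t statistic across vertices.

    diff_maps: (n_subjects, n_vertices) per-subject difference images.
    """
    n = diff_maps.shape[0]
    mean = diff_maps.mean(axis=0)
    se = diff_maps.std(axis=0, ddof=1) / np.sqrt(n)
    return np.max(mean / se)

def permutation_fwer_threshold(diff_maps, alpha=0.05, n_perm=1000, seed=0):
    """FWER threshold from the empirical max-statistic distribution under sign flipping."""
    rng = np.random.default_rng(seed)
    n = diff_maps.shape[0]
    max_dist = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))   # random sign flip per subject
        max_dist[p] = max_t(signs * diff_maps)
    return np.quantile(max_dist, 1 - alpha)

# Hypothetical data: 12 subjects, 5000 cortical vertices, no true effect.
rng = np.random.default_rng(2)
diff_maps = rng.standard_normal((12, 5000))
u_perm = permutation_fwer_threshold(diff_maps)
print(u_perm, max_t(diff_maps) > u_perm)
```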

Permutation and RF theory methods have been applied widely in functional (Andrade et al., 2000, Nichols and Holmes, 2001, Worsley et al., 1992, Worsley et al., 1996) and structural (Bullmore et al., 1999, Chung, 2001, Pantazis et al., 2004, Sowell et al., 1999a, Sowell et al., 1999b, Thompson et al., 2001, Thompson et al., 2003) brain imaging. However, until recently, error rate control in MEG experiments has drawn little attention. Dale et al. (2000) normalized the CDMs using an estimate of the background noise variance at each cortical element. These normalized images follow a t distribution under the null hypothesis of Gaussian background noise. Thresholding of the resulting statistical maps was then used to detect significant activation; however, the multiple comparisons problem was not addressed. Barnes and Hillebrand (2003) presented an application of RF theory to MEG data, but their method is specifically tailored to beamforming solutions rather than general linear inverse methods. Carbonell et al. (2004) used Hotelling's T² random fields to localize significant MEG/EEG activation in time, and then t statistics to achieve spatial localization. Permutation tests were applied by Blair and Karniski (1994) for the analysis of EEG data as recorded on an array of electrodes, and by Pantazis et al. (2003) for the analysis of MEG data in reconstructed cortical maps of brain activation. An alternative permutation scheme, proposed by Singh et al. (2003), detects event-related synchronization or desynchronization components in an MEG study involving visual stimulation and a Linearly Constrained Minimum Variance (LCMV) beamformer applied to data decomposed into multiple frequency bands. The current work presents a novel, general RF theory-based method to control the FWER in MEG. It also extends the results in Pantazis et al. (2003) to extract thresholds for each time point. Finally, we compare RF theory and permutation methods in terms of specificity, sensitivity, and possible limitations.

Section snippets

Methods

Our goal is to detect spatial and temporal regions of significant activity in MEG-based cortical maps while controlling the familywise error rate. The methods we describe below also apply directly to cortical maps computed from EEG data, since the inverse imaging methods differ only in the forms of their lead field matrices (Baillet et al., 2001). In this section, we first describe our MEG data model. We then present two methods, the first based on RF theory and the second on permutation resampling.

Results

In this section, we evaluate the random field method and the three permutation methods given in Table 1 in terms of specificity, i.e., the ability of the methods to control false-positives, and sensitivity, a measure of how well the method can detect and localize true brain activation.
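
These two quantities can be estimated empirically from simulation outputs; a minimal sketch with made-up arrays is shown below, where specificity is assessed through the fraction of null simulations with any suprathreshold vertex (ideally equal to the nominal α) and sensitivity through the fraction of truly active vertices detected.

```python
import numpy as np

def empirical_fwer(null_stat_maps, u):
    """Fraction of null simulations with at least one vertex above threshold u."""
    return np.mean(null_stat_maps.max(axis=1) > u)

def empirical_sensitivity(signal_stat_maps, true_mask, u):
    """Average fraction of truly active vertices detected above threshold u."""
    return np.mean(signal_stat_maps[:, true_mask] > u)

# Hypothetical simulation outputs: 500 null runs, 500 signal runs, 5000 vertices.
rng = np.random.default_rng(3)
null_maps = rng.standard_normal((500, 5000))
signal_maps = rng.standard_normal((500, 5000))
true_mask = np.zeros(5000, dtype=bool)
true_mask[:50] = True
signal_maps[:, true_mask] += 4.0     # injected effect at the "active" vertices

u = 4.3                              # example threshold
print(empirical_fwer(null_maps, u), empirical_sensitivity(signal_maps, true_mask, u))
```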

Conclusion

We have presented RF theory and permutation methods for processing MEG data and extracting significant activation maps. They can be used with any linear or nonlinear cortical imaging method to obtain objective thresholds on statistic maps. The random field method demonstrated valid but conservative performance in our null simulation experiments; the observed FWER was 0.034 (vs. 0.05 nominal) for smoothed data, and worse for unsmoothed data. However, the method successfully identified the two

Acknowledgments

This work was supported in part by grants from NIBIB (R01 EB002010) and NCRR (P41 RR013642) and in part by the Human Brain Project/Neuroinformatics research program funded by NIMH, NIA, and NIBIB.

References (41)

  • S. Baillet et al.

    Electromagnetic brain mapping

    IEEE Signal Process. Mag.

    (2001)
  • G.R. Barnes et al.

    Statistical flattening of MEG beamformer images

    Hum. Brain Mapp.

    (2003)
  • R.C. Blair et al.

    Distribution-free statistical analyses of surface and volumetric maps

  • E.T. Bullmore et al.

    Global, voxel, and cluster tests, by theory and permutation, for a difference between two groups of structural MR images of the brain

    IEEE Trans. Med. Imag.

    (1999)
  • Chung, M.K., 2001. Statistical morphometry in neuroanatomy. PhD thesis, McGill University,...
  • A.M. Dale et al.

    Improved localization of cortical activity by combining EEG and MEG with MRI cortical surface reconstruction: a linear approach

    J. Cogn. Neurosci.

    (1993)
  • S. Greenhouse et al.

    On methods in the analysis of profile data

    Psychometrika

    (1959)
  • M. Hämäläinen et al.

    Magnetoencephalography: theory, instrumentation and applications to the noninvasive study of human brain function

    Rev. Mod. Phys.

    (1993)