# MEG (Magnetoencephalography) Program


### Statistical inference

Questions like ‘How different is the dipole location between these two experimental conditions?’ and ‘Are source amplitudes larger in one condition than in a control condition?’ belong to statistical inference from experimental data. The basic problem of interest here is hypothesis testing, which may potentially invalidate a model under investigation. Here, the model must be understood at a higher hierarchical level than, e.g., an MEG/EEG source model: it is supposed to address the neuroscience question that motivated the data acquisition and the experimental design (Guilford & Fruchter, 1978).

In the context of MEG/EEG, the population samples that will support the inference are either trials or subjects, for hypothesis testing at the individual and group levels, respectively.

As in the case of the estimation of confidence intervals, both parametric and non-parametric approaches to statistical inference can be considered. There is no space here for a comprehensive review of tools based on parametric models. They have been and still are extensively studied in the fMRI and PET communities – and recently adapted to EEG and MEG (Kiebel, Tallon-Baudry, & Friston, 2005) – and popularized with software toolboxes such as SPM (K. Friston, Ashburner, Kiebel, Nichols, & Penny, 2007).

Non-parametric approaches such as permutation tests have emerged for statistical inference applied to neuroimaging data (Nichols & Holmes, 2002; Pantazis, Nichols, Baillet, & Leahy, 2005). Rather than applying transformations to the data to secure the assumption of normally-distributed measures, non-parametric statistical tests take the data as they are and are robust to departures from normal distributions.

In brief, hypothesis testing formulates an assumption about the data that the researcher is interested in questioning. This basic assumption is called the null hypothesis, H0, and is traditionally formulated to express the absence of a significant effect in the data, e.g., ‘There are no differences in the MEG/EEG source model between two experimental conditions’. The statistical test expresses the significance of this hypothesis by evaluating the probability that the observed statistic would be obtained just by chance. In other words, under H0 the data from both conditions are interchangeable. This is literally what permutation testing does: it computes the sample distribution of the estimated parameters under the null hypothesis and verifies whether a statistic computed from the original parameter estimates was likely to be generated under this law.
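The exchangeability argument above can be sketched in a few lines of Python. This is a minimal illustration with simulated, hypothetical trial amplitudes (the data, effect size, and function name are assumptions, not from the original study): trial labels are shuffled between the two conditions to build the null distribution of the test statistic, and the permutation p-value is the fraction of shuffles that produce a statistic at least as extreme as the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(cond_a, cond_b, n_perm=5000, rng=rng):
    """Two-sample permutation test on the difference of means.

    Under H0 the trials from both conditions are interchangeable,
    so we pool them and randomly re-assign the labels."""
    observed = cond_a.mean() - cond_b.mean()
    pooled = np.concatenate([cond_a, cond_b])
    n_a = len(cond_a)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = perm[:n_a].mean() - perm[n_a:].mean()
        if abs(stat) >= abs(observed):  # two-sided test
            count += 1
    # add-one correction: the observed labeling is itself one permutation
    return (count + 1) / (n_perm + 1)

# hypothetical source amplitudes: 50 trials per condition
a = rng.normal(1.0, 1.0, 50)   # condition with a simulated effect
b = rng.normal(0.0, 1.0, 50)   # control condition
p = permutation_test(a, b)
```

With a genuine difference between conditions, as simulated here, the returned p-value falls well below conventional thresholds; with identical distributions it would be approximately uniform on (0, 1].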

We shall now briefly review the principles of testing multiple hypotheses from the same sample of measurements, which induces errors when multiple parameters are tested at once. This issue pertains to statistical inference at both the individual and group levels. Samples therefore consist of repetitions (trials) of the same experiment in the same subject, or of the results of the same experiment within a set of subjects, respectively. This distinction is not crucial at this point. We shall however point to the issue of spatial normalization of the brain across subjects, either by applying normalization procedures (Ashburner & Friston, 1997) or by defining a generic coordinate system on the cortical surface (Fischl, Sereno, & Dale, 1999; Mangin et al., 2004).

The outcome of a test evaluates the probability p that the statistic computed from the data samples arose by chance alone, as expressed by the null hypothesis. The investigator needs to fix a threshold on p a priori; above this threshold H0 cannot be rejected, thereby corroborating H0. Tests are designed to be computed once from the data sample, so that the error – called the type I error – of wrongly rejecting H0 while it is actually true stays below the predefined p-value.

If the same data sample is used for several tests, we multiply the chances of committing a type I error. This is particularly critical when running tests on the sensor or source amplitudes of an imaging model, as the number of tests is on the order of 100 and 10,000, respectively. In the latter case, a 5% error rate over 10,000 tests is expected to generate 500 false positives by wrongly rejecting H0. This is obviously not desirable, and this is the reason why this so-called family-wise error rate (FWER) should be kept under control.
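The arithmetic behind this inflation is worth spelling out. A short sketch (the variable names are illustrative) showing the expected number of false positives quoted above, the probability of at least one false positive when all tests are independent and all null hypotheses are true, and the classical Bonferroni-corrected threshold:

```python
n_tests = 10_000   # e.g., one test per source in an imaging model
alpha = 0.05       # per-test type I error rate

# expected number of false positives if H0 holds everywhere
expected_fp = n_tests * alpha          # 10,000 * 0.05 = 500

# family-wise error rate: P(at least one false positive),
# assuming independent tests
fwer = 1 - (1 - alpha) ** n_tests      # essentially 1

# Bonferroni correction: per-test threshold that bounds FWER by alpha
bonferroni_threshold = alpha / n_tests # 5e-06
```

Bonferroni control is simple but very conservative for MEG/EEG maps, whose neighboring sensors and sources are strongly correlated; this is one motivation for the permutation-based alternatives discussed next.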

Parametric approaches to address this issue have been elaborated using the theory of random fields and have gained tremendous popularity through the SPM software (K. Friston et al., 2007). These techniques have been extended to electromagnetic source imaging, but are less robust to departures from normality than non-parametric solutions. The FWER in non-parametric testing can be controlled by using, e.g., the statistics of the maximum over the entire source image or topography at the sensor level (Pantazis et al., 2005).
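The maximum-statistic idea can be sketched as follows, again on simulated data (array shapes, effect size, and function name are illustrative assumptions): for each permutation of the trial labels, the maximum absolute statistic over the whole source image is recorded, and the (1 - alpha) quantile of that distribution serves as a single threshold that controls the FWER over all sources at once.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_stat_threshold(data_a, data_b, n_perm=1000, alpha=0.05, rng=rng):
    """FWER-controlling threshold from the permutation distribution of the
    maximum absolute difference of means over all sources.

    data_a, data_b: (n_trials, n_sources) arrays of source amplitudes."""
    n_a = data_a.shape[0]
    pooled = np.vstack([data_a, data_b])
    max_stats = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(pooled.shape[0])     # shuffle trial labels
        pa, pb = pooled[idx[:n_a]], pooled[idx[n_a:]]
        diff = pa.mean(axis=0) - pb.mean(axis=0)   # one value per source
        max_stats[i] = np.abs(diff).max()          # max over the whole image
    return np.quantile(max_stats, 1 - alpha)

# hypothetical data: 30 trials x 200 sources, with an effect at source 0 only
a = rng.normal(0.0, 1.0, (30, 200)); a[:, 0] += 2.0
b = rng.normal(0.0, 1.0, (30, 200))

thresh = max_stat_threshold(a, b)
observed = np.abs(a.mean(axis=0) - b.mean(axis=0))
significant = observed > thresh   # single threshold for all 200 sources
```

Because the threshold is derived from the maximum over the entire image, any source exceeding it is significant at the chosen family-wise level, with no need for a separate per-source correction.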

The emergence of statistical inference solutions adapted to MEG/EEG has brought electromagnetic source localization and imaging to a considerable degree of maturity, quite comparable to that of other neuroimaging techniques. Most software solutions now integrate sound tools for statistical inference on MEG and EEG data, and this is a field that is still growing rapidly.

MEG functional connectivity and statistical inference at the group level illustrated: Jerbi et al. (2007) revealed a cortical functional network involved in hand movement coordination at low frequency (4 Hz). The statistical group inference first consisted of fitting, for each trial in the experiment, a distributed source model constrained to the individual anatomy of each of the 14 subjects involved. The brain area with maximum coherent activation with instantaneous hand speed was identified within the contralateral sensorimotor area (white dot). The traces at the top illustrate excellent coherence in the [3,5] Hz range between these measurements (hand speed in green and M1 motor activity in blue). Secondly, the search for brain areas whose activity was in significant coherence with M1 revealed a larger distributed network of regions. All subjects were coregistered to a brain surface template in Talairach normalized space, with the corresponding activations interpolated onto the template surface. A non-parametric t-test contrast was computed using permutations between rest and task conditions (p < 0.01).

Copyright 2010 Sylvain Baillet, PhD