We have discussed how fitting dipoles to a data time segment may be quite sensitive to initial conditions and therefore subjective. Similarly, imaging source models suggest that potentially every brain location is active. It is therefore important to evaluate the confidence one may place in a given model. In other words, we are now looking for error bars that would define a confidence interval about the estimated values of a source model.
Signal processors have developed a principled approach to this question, coined 'detection and estimation theory' (Kay, 1993). The main objective is to understand how certain one can be about the estimated parameters of a model, given a model of the noise in the data. The basic approach considers the estimated parameters (e.g., source locations) as random variables. Parametric estimation of error bounds on the source parameters then consists in estimating their bias and variance.
Bias is the distance between the true parameter value and the expectation of the estimated parameter values under perturbations; the definition of variance follows immediately. Cramér-Rao lower bounds (CRLB) on the estimator's variance can be computed explicitly given an analytical solution to the forward model and a model of the perturbations (e.g., normally distributed noise). In a nutshell, the tighter the CRLB, the more confident one can be about the estimated values. Mosher, Spencer, Leahy, and Lewis (1993) investigated this approach using extensive Monte-Carlo simulations, which evidenced a resolution of a few millimeters for single-dipole models. These results were later confirmed by phantom studies (Leahy, Mosher, Spencer, Huang, & Lewine, 1998; Baillet, Riera, et al., 2001). The CRLB increased markedly for two-dipole models, thereby demonstrating their extreme sensitivity and instability.
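For a scalar parameter θ estimated as θ̂ from noisy data, the quantities involved can be written in their standard textbook form (the notation here is generic, not specific to any particular forward model):

```latex
\mathrm{bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta,
\qquad
\mathrm{var}(\hat{\theta}) = \mathbb{E}\!\left[\left(\hat{\theta} - \mathbb{E}[\hat{\theta}]\right)^{2}\right],
```

and, for an unbiased estimator, the Cramér-Rao lower bound states

```latex
\mathrm{var}(\hat{\theta}) \;\ge\; I(\theta)^{-1},
\qquad
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial \ln p(x;\theta)}{\partial \theta}\right)^{2}\right],
```

where p(x; θ) is the likelihood of the data x under the noise model and I(θ) is the Fisher information. Evaluating I(θ) requires the analytical forward model mentioned above, which is why the CRLB is a parametric bound.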
Recently, non-parametric approaches to the determination of error bounds have greatly benefited from the commensurate increase in computational power. Jackknife and bootstrap techniques have proven to be efficient and powerful tools for estimating confidence intervals on MEG/EEG source parameters, regardless of the nature of the perturbations and of the source model.
These techniques are all based on data resampling and have been shown to be accurate and efficient when a large enough number of experimental replications is available (Davison & Hinkley, 1997). This is typically the case in MEG/EEG experiments, where protocols are designed around multiple trials. If we are interested, e.g., in the confidence interval on a source location in a single-dipole model fitted to evoked averaged data, the bootstrap generates a large number (typically >500) of surrogate average datasets by randomly drawing trials, with replacement, from the original set of trials and averaging them together. Because the trials are drawn at random from the complete set, the resulting sample distribution of the estimated parameter values converges toward the true distribution. A pragmatic definition of a confidence interval thereby consists in identifying the interval containing, e.g., 95% of the resampled estimates (Baryshnikov, Veen, & Wakai, 2004; Darvas et al., 2005; McIntosh & Lobaugh, 2004).
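The resampling scheme just described can be sketched in a few lines. The sketch below assumes single-trial data stored as a NumPy array and a user-supplied estimator; the `fit` function passed in the toy usage (peak amplitude of the average) is a hypothetical stand-in for an actual dipole-fitting routine, which is beyond the scope of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(trials, estimator, n_boot=500, alpha=0.05):
    """Percentile-bootstrap confidence interval for a source parameter.

    trials    : array of shape (n_trials, n_channels, n_times), single-trial data
    estimator : function mapping an averaged dataset to a scalar parameter
                (e.g., one coordinate of a fitted dipole location)
    """
    n_trials = trials.shape[0]
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        # Draw n_trials trials at random, with replacement, and average them
        idx = rng.integers(0, n_trials, size=n_trials)
        surrogate_average = trials[idx].mean(axis=0)
        # Re-estimate the parameter on the surrogate average dataset
        estimates[b] = estimator(surrogate_average)
    # The 95% interval (for alpha = 0.05) contains the central mass of resamples
    lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return lo, hi, estimates

# Toy usage: the "parameter" is simply the peak amplitude of the averaged trace
trials = rng.normal(loc=1.0, scale=0.5, size=(200, 4, 50))
lo, hi, estimates = bootstrap_ci(trials, lambda avg: avg.max())
```

In a real MEG/EEG application the estimator would refit the full source model (e.g., a single ECD) to each surrogate average, so the per-resample cost is one complete inverse fit; this is what makes the approach computationally demanding but model-agnostic.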
The bootstrap procedure yields non-parametric estimates of confidence intervals on source parameters. This is illustrated here with data from a study of the somatotopic cortical representation of hand fingers. Ellipsoids represent the resulting 95% confidence intervals on the location of the ECD, taken as a model of the 40-ms (a) and 200-ms (b) brain responses following hand-finger stimulation. Ellipsoid gray levels encode the stimulated fingers. While in (a) the confidence ellipsoids do not overlap between fingers, they increase considerably in volume for the secondary responses in (b), thereby demonstrating that a single ECD is not a proper model of brain currents at this later latency. Note that similar evaluations may be obtained for imaging models using the same resampling methodology.
These considerations lead naturally to statistical inference, which addresses hypothesis testing.