Ill-posed inverse problems
A fundamental principle is that, whereas the forward problem has a unique solution in classical physics (as dictated by the causality principle), the inverse problem may admit multiple solutions, that is, multiple models that predict the observations equally well.
In MEG and EEG, the situation is acute: it was demonstrated theoretically by von Helmholtz back in the 19th century that the general inverse problem of finding the sources of an electromagnetic field measured outside a volume conductor has an infinite number of solutions. This issue of non-uniqueness is not specific to MEG/EEG: geophysicists, for instance, are also confronted with non-uniqueness when trying to determine the distribution of mass inside a planet from measurements of its external gravity field. In theory, then, an infinite number of source models fit any set of MEG and EEG observations equally well, which would seem to make these poor techniques for scientific investigation. Fortunately, this question has been addressed by the mathematics of ill-posedness and inverse modeling, which formalize the need to complement a basic theoretical model with additional contextual information.
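The non-uniqueness of an underdetermined forward model can be illustrated with a small numerical sketch (hypothetical gain values, not an actual MEG/EEG lead field): any component drawn from the null space of the forward operator can be added to a source configuration without changing the predicted measurements, so two very different source models produce identical data.

```python
import numpy as np

# Toy forward model: 2 sensors, 4 candidate sources (hypothetical gains).
L = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.3, 0.8, 0.6, 0.4]])

# One source configuration and the data it predicts.
s1 = np.array([1.0, 0.0, 2.0, -1.0])
data = L @ s1

# Any null-space direction of L leaves the predicted data unchanged.
_, _, Vt = np.linalg.svd(L)
null_vec = Vt[-1]            # satisfies L @ null_vec ≈ 0
s2 = s1 + 5.0 * null_vec     # a very different source model...

print(np.allclose(L @ s1, L @ s2))   # ...predicting the same observations
```

Because the two models are indistinguishable from the data alone, choosing between them requires extra information, which is precisely the role of the a priori constraints discussed below.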
Hence the inverse problem is a true modeling problem. This has both philosophical and technical implications for the general theory and the practice of inverse problems (Tarantola, 2004). For instance, it is important to obtain measures of uncertainty on the estimated values of the model parameters. Indeed, we want to avoid situations where a large set of values for some of the parameters produces models that account for the experimental observations equally well. If such a situation arises, it is important to be able to question the quality of the experimental data and, perhaps, to falsify the theoretical model.
Non-uniqueness of the solution is one situation in which an inverse problem is said to be ill-posed. In the reciprocal situation, where no value of the system’s parameters can account for the observations, the data are said to be inconsistent (with the model). A third critical form of ill-posedness arises when the model parameters do not depend continuously on the data: even tiny changes in the observations (e.g., the addition of a small amount of noise) trigger major variations in the estimated values of the model parameters. This matters in any experimental situation, and in MEG/EEG in particular, where estimated brain source amplitudes should not ‘jump’ dramatically from one millisecond to the next.
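The lack of continuous dependence on the data can be sketched with a deliberately ill-conditioned toy system (hypothetical numbers, not a physiological model): a perturbation of roughly 0.05% on one observation throws the naively estimated parameters far from their true values.

```python
import numpy as np

# A nearly singular forward matrix: an ill-conditioned toy system.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

b = np.array([2.0, 2.0001])          # noiseless data; true solution is (1, 1)
b_noisy = b + np.array([0.0, 1e-3])  # a tiny measurement perturbation

x = np.linalg.solve(A, b)
x_noisy = np.linalg.solve(A, b_noisy)

print(x)        # ≈ [1., 1.]
print(x_noisy)  # ≈ [-9., 11.]: a huge jump from a minute change in the data
```

This is exactly the behavior that makes unregularized source estimates ‘jump’ from one time sample to the next in the presence of sensor noise.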
The epistemology and early mathematics of ill-posedness were laid out by Jacques Hadamard (Hadamard, 1902), who rather radically stated that problems that are not uniquely solvable are of no interest whatsoever. This statement is obviously unfair to important questions in science such as gravimetry, the backward heat equation and, surely, MEG/EEG source modeling.
The modern view on the mathematical treatment of ill-posed problems was initiated in the 1960s by Andrei N. Tikhonov and his introduction of the concept of regularization, which spectacularly formalized the solution of ill-posed problems (Tikhonov & Arsenin, 1977). Tikhonov suggested that certain mathematical manipulations of the expression of an ill-posed problem could turn it into a well-posed one, in the sense that a solution would exist and possibly be unique. More recently, this approach has found a more general and intuitive framework in the theory of probability, which naturally accommodates the uncertainty and a priori contextual information inherent to the experimental sciences (see, e.g., Tarantola, 2004).
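Tikhonov's remedy can be sketched on the same kind of ill-conditioned toy system as above (hypothetical numbers; a minimal sketch of the classic ridge form of Tikhonov regularization, x = (AᵀA + λI)⁻¹Aᵀb): penalizing the norm of the solution trades a little bias for stability against measurement noise.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])        # nearly singular forward matrix
b = np.array([2.0, 2.0001])          # data generated by x_true = (1, 1)
b_noisy = b + np.array([0.0, 1e-3])  # tiny measurement perturbation

x_plain = np.linalg.solve(A, b_noisy)  # unstable: lands far from (1, 1)
x_reg = tikhonov(A, b_noisy, lam=1e-4) # stable: stays close to (1, 1)

print(x_plain)  # ≈ [-9., 11.]
print(x_reg)    # close to [1., 1.]
```

The regularization weight λ encodes the contextual a priori (here, a preference for small-norm solutions); choosing it is itself a modeling decision, which is why the probabilistic reading of regularization is so natural.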
As of 2010, more than 2,000 journal articles in the U.S. National Library of Medicine publication database matched the query ‘(MEG OR EEG) AND source’. This abundant literature might ironically be considered a small sample of the infinite number of solutions to the problem, but it rather reflects the many different ways MEG/EEG source modeling can be approached by bringing in additional information of various kinds.
Such a large number of reports on a single technical issue has certainly been detrimental to the visibility and credibility of MEG/EEG as a brain mapping technique within the larger functional brain mapping community, where the fMRI inverse problem reduces to the well-posed estimation of the BOLD signal (though that estimation is subject to major detection issues).
Today, it seems that a reasonable degree of technical maturity has been reached by electromagnetic brain imaging using MEG and/or EEG. All methods reduce to a handful of well-identified classes of approaches. Methodological research in MEG/EEG source modeling is now moving from the development of inverse estimation techniques toward statistical appraisal and the identification of functional connectivity. In this respect, it is joining concerns shared by the other functional brain imaging communities (Salmelin & Baillet, 2009).