Scanning techniques: Spatial filters, beamformers and signal classifiers
The inherent difficulties of source localization with multiple generators and noisy data have led signal processors to develop alternative approaches, most notably in the glorious field of radar and sonar in the 1970s. Rather than attempting to identify discrete sets of sources by adjusting their non-linear location parameters, scanning techniques have emerged, which proceed by systematically sifting through the brain space to evaluate how a predetermined elementary source model would fit the data at every voxel of the brain volume. For this local model evaluation to be specific to the brain location being scanned, contributions from possible sources located elsewhere in the brain volume need to be blocked. Hence, these techniques are known as spatial filters and beamformers (the simile is a virtual beam being directed at, and ‘listening’ exclusively to, some brain region).
These techniques have triggered tremendous interest and applications in array signal processing and have percolated into the MEG/EEG community on several occasions (e.g., (Spencer, Leahy, Mosher, & Lewis, 1992) and, more recently, (Hillebrand, Singh, Holliday, Furlong, & Barnes, 2005)). At each point of the brain grid, a narrow-band spatial filter is formed to evaluate the contribution to the data from an elementary source model – such as a single current dipole or a triplet of current dipoles – while contributions from other brain regions are ideally muted, or at least attenuated. (Van Veen & Buckley, 1988) is a technical introduction to beamformers and excellent further reading.
It is sometimes claimed that beamformers do not solve an inverse problem; this is a bit overstated. Indeed, spatial filters do require a source model and a forward model, both of which are confronted with the observations. Beamformers scan the entire expected source space and systematically test the predictions of the source and forward models against the observations. These predictions compose a distributed score map, which should not be misinterpreted as a current density map. More technically – though no details are given here – the forward model needs to be inverted by the beamformer as well; it merely proceeds iteratively, sifting through each source grid point and estimating the output of the corresponding spatial filter. Hence beamformers and spatial filters are truly avatars of inverse modeling.
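The scanning principle can be sketched in a few lines. The toy simulation below forms a unit-gain LCMV-type spatial filter at each point of a hypothetical source grid and records its output power as the score map; the dimensions, random lead fields and noise levels are made-up illustrative values, not a prescription for real MEG/EEG data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_voxels, n_samples = 32, 50, 2000

# Hypothetical lead fields: one unit-norm column per scanned voxel
# (random vectors stand in for a real forward model).
L = rng.standard_normal((n_sensors, n_voxels))
L /= np.linalg.norm(L, axis=0)

# Simulate data: one active source at voxel 17, plus sensor noise.
active = 17
s = rng.standard_normal(n_samples)
data = np.outer(L[:, active], s) + 0.1 * rng.standard_normal((n_sensors, n_samples))

# Sample covariance of the data, lightly regularized for numerical stability.
C = data @ data.T / n_samples
C += 1e-6 * np.trace(C) / n_sensors * np.eye(n_sensors)
Cinv = np.linalg.inv(C)

# LCMV scan: the filter w = C^{-1} l / (l' C^{-1} l) passes its own lead
# field with unit gain while minimizing total output power; the score at
# each voxel is the resulting filter output power 1 / (l' C^{-1} l).
score = np.empty(n_voxels)
for v in range(n_voxels):
    l = L[:, v]
    score[v] = 1.0 / (l @ Cinv @ l)

peak = int(np.argmax(score))
print("score map peaks at voxel", peak, "- true source:", active)
```

Note that `score` is precisely the distributed score map discussed above: it peaks where the dipole model explains the data well, but its values are not current densities.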
Beamforming is therefore a convenient method for translating the source localization problem into a signal detection issue. As with every method that tackles a complex estimation problem, the technique has drawbacks:
- Beamformers depend on the covariance statistics of the noise in the data. The latter may be estimated from the data through sample statistics. However, the number of independent data samples necessary for a robust – and numerically stable – estimate of the covariance statistics is proportional to the square of the number of data channels, i.e., sensors. Hence beamformers ideally require long, stationary episodes of data, such as sweeps of ongoing, unaveraged data and experimental conditions where behavioral stationarity ensures some form of statistical stationarity in the data (e.g., ongoing movements). (Cheyne, Bakhtazad, & Gaetz, 2006) have suggested that event-related brain responses can be well captured by beamformers using sample statistics estimated across single trials.
- They are more sensitive to errors in the head model. The filter outputs are typically equivalent to local estimates of SNR. However, SNR is not distributed homogeneously throughout the brain volume: MEG/EEG signals from activity in deeper brain regions – or from gyral generators in MEG – have weaker SNR than signals from the rest of the brain. The consequence is side-lobe leakage from interfering sources nearby, which impedes filter selectivity and therefore the specificity of source detection (Wax & Anu, 1996);
- Beamformers may be fooled by simultaneous activations occurring in brain regions outside the filter pass-band that are highly correlated with source signals within the pass-band. Such external sources are interpreted as interference by the beamformer, which then blocks the signals of interest because they bear the same sample statistics as the interference.
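The last point can be demonstrated numerically. In the toy simulation below (random, hypothetical lead fields; arbitrary noise level; scalar sources), the output power of an LCMV-type filter aimed at one source collapses when a second, perfectly correlated source is active elsewhere:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_samples = 32, 5000

# Two hypothetical unit-norm lead fields for two distant sources.
la = rng.standard_normal(n_sensors); la /= np.linalg.norm(la)
lb = rng.standard_normal(n_sensors); lb /= np.linalg.norm(lb)

def lcmv_power(data, l):
    """Output power of a unit-gain LCMV filter aimed at lead field l."""
    C = data @ data.T / data.shape[1]
    C += 1e-6 * np.trace(C) / len(C) * np.eye(len(C))
    return 1.0 / (l @ np.linalg.inv(C) @ l)

noise = 0.1 * rng.standard_normal((n_sensors, n_samples))
s = rng.standard_normal(n_samples)

# Case 1: two uncorrelated sources (independent time courses).
s2 = rng.standard_normal(n_samples)
uncorr = np.outer(la, s) + np.outer(lb, s2) + noise

# Case 2: two perfectly correlated sources (same time course at both sites).
corr = np.outer(la, s) + np.outer(lb, s) + noise

# The filter treats the correlated distant source as interference sharing
# the statistics of the signal of interest, and cancels both.
print(lcmv_power(uncorr, la), lcmv_power(corr, la))
```

Under these assumptions the power recovered at the first source drops by roughly an order of magnitude in the correlated case, illustrating the signal cancellation described above.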
Signal processors had long identified these issues and consequently developed multiple signal classification (MUSIC) as an alternative technique (Schmidt, 1986). MUSIC assumes that signal and noise components in the data are uncorrelated. Strong theoretical results in information theory show that these components live in separate, high-dimensional data subspaces, which can be identified using, e.g., a PCA of the data time series (Golub, 1996). (J. C. Mosher, Baillet, & Leahy, 1999) is an extensive review of signal classification approaches to MEG and EEG source localization.
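A minimal sketch of the MUSIC scan, under the same toy assumptions as before (random unit-norm lead fields standing in for a real forward model): the signal subspace is extracted from an SVD of the data matrix, and each candidate location is scored by the correlation of its lead field with that subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_voxels, n_samples = 32, 50, 2000

# Hypothetical unit-norm lead fields, one column per candidate location.
L = rng.standard_normal((n_sensors, n_voxels))
L /= np.linalg.norm(L, axis=0)

# Two active sources with uncorrelated time courses (a MUSIC assumption).
active = [5, 23]
S = rng.standard_normal((2, n_samples))
data = L[:, active] @ S + 0.1 * rng.standard_normal((n_sensors, n_samples))

# SVD/PCA of the data: the leading left singular vectors span the signal
# subspace; the rank (here 2) is read off the singular value spectrum.
U, svals, _ = np.linalg.svd(data, full_matrices=False)
Us = U[:, :2]

# MUSIC metric: subspace correlation between each lead field and Us;
# it approaches 1 at the true source locations.
music = np.array([np.linalg.norm(Us.T @ L[:, v]) for v in range(n_voxels)])

peaks = sorted(int(v) for v in np.argsort(music)[-2:])
print("MUSIC peaks at voxels", peaks, "- true sources:", active)
```

As with the beamformer scan, `music` is a score map to be searched for peaks, not a current density estimate.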
However, the practical use of MUSIC and its variations remains limited by their sensitivity to the accurate definition of the respective signal and noise subspaces. These techniques may be fooled by background brain activity, whose signals share similar properties with the event-related responses of interest. An interesting side application of MUSIC's powerful discrimination ability, though, has been developed for epilepsy spike sorting (Ossadtchi et al., 2004).
In summary, spatial filters, beamformers and signal classification approaches bring us closer to a distributed representation of the brain's electrical activity. As a caveat, the results generated by these techniques are not an estimation of the current density everywhere in the brain. They represent a score map of a source model – generally a current dipole – evaluated at the points of a predefined spatial lattice, which sometimes leads to misinterpretations. The localization issue thereby becomes a signal detection problem within the score map (J. Mosher, Baillet, & Leahy, 2003). The imaging approaches introduced next push this detection problem further by estimating the brain current density globally.