The Analysis of Face Perception MEG and EEG Data Using a Potts-Mixture Spatiotemporal Joint Model
In this paper we analyze magnetoencephalography (MEG) and electroencephalography (EEG) data from a single subject with the objective of determining the location and dynamics of brain activity when the subject is repeatedly presented with pictures of scrambled faces and required to make a symmetry judgement. Meeting this objective requires solving the ill-posed inverse problem that arises when MEG and EEG measure electromagnetic brain activity over an array of sensors at the scalp and the goal is to map these data back to the sources of neural activity within the brain. A novel challenge posed by the current dataset is that it involves combined MEG and EEG recordings, and it is of interest to fuse these two modalities. We propose a new Bayesian finite mixture state-space model that builds on previously developed models and incorporates two major extensions required for our application: (i) we formulate a joint model that deals with the EEG and MEG modalities simultaneously; (ii) we incorporate the Potts model to represent spatial dependence in an allocation process that partitions the cortical surface into a small number of latent states termed mesostates. We formulate the new spatiotemporal model and derive an efficient procedure for simultaneous point estimation and model selection based on the iterated conditional modes (ICM) algorithm combined with local polynomial smoothing. The proposed method yields a novel estimator for the number of mixture components and selects active brain regions, which correspond to active variables in a high-dimensional dynamic linear model. The methodology is investigated using synthetic data and then applied to our motivating application to examine the neural response to the perception of scrambled faces.
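To give a concrete sense of the ICM-with-Potts-prior idea referenced above, the sketch below runs iterated conditional modes on a toy one-dimensional allocation problem: each site carries a latent label (a stand-in for the mesostate allocation), observations are Gaussian around a state-specific mean, and the Potts term rewards agreement with neighboring labels. All numerical values (`K`, `beta`, the state means, the noise level) are hypothetical illustrations, not quantities from the paper, and the paper's actual model is a far richer spatiotemporal state-space formulation on the cortical surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a 1-D chain of n sites, each assigned one of K latent states
# (a simplified stand-in for the mesostate allocation process).
K, n = 3, 60
state_means = np.array([-2.0, 0.0, 2.0])          # hypothetical state means
true_labels = np.repeat([0, 1, 2], n // K)        # three contiguous regions
y = state_means[true_labels] + rng.normal(scale=0.8, size=n)

beta = 1.0        # Potts interaction strength (hypothetical value)
sigma2 = 0.8**2  # observation noise variance

def icm(labels, n_iter=20):
    """Iterated conditional modes: sweep the sites, and at each site pick
    the label maximizing the local conditional score, i.e. the Gaussian
    log-likelihood plus the Potts neighbor-agreement bonus."""
    labels = labels.copy()
    for _ in range(n_iter):
        for i in range(n):
            neighbors = [labels[j] for j in (i - 1, i + 1) if 0 <= j < n]
            scores = np.empty(K)
            for k in range(K):
                loglik = -0.5 * (y[i] - state_means[k]) ** 2 / sigma2
                agree = sum(k == nb for nb in neighbors)
                scores[k] = loglik + beta * agree
            labels[i] = int(np.argmax(scores))
    return labels

est = icm(rng.integers(0, K, size=n))
accuracy = np.mean(est == true_labels)
```

Because ICM is a greedy coordinate-wise maximizer, each sweep can only increase the joint objective, which is what makes it attractive for the high-dimensional allocation problems the abstract describes; the trade-off is convergence to a local mode rather than the global one.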