The integration of signals from electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), acquired simultaneously from the same observer, holds great potential for elucidating the neurobiological underpinnings of human brain function. However, how best to combine the two data streams to achieve this goal remains unclear. In this thesis, a symmetric, data-driven route to the integration of multimodal functional brain imaging data, based on information theory, is proposed. As a proof of principle, the framework, originally developed in the study of neuronal population codes, is applied in the experimental context of visually evoked responses and the neural underpinnings of visual perceptual decisions. The implications, benefits, and limitations of this theoretical framework for the analysis of simultaneously acquired EEG and fMRI data are discussed.