Klinische Neurophysiologie 2012; 43 - P060
DOI: 10.1055/s-0032-1301610

Modeling crossmodal interactions in emotional audiovisual integration

V Müller 1, EC Cieslik 1, BI Turetsky 2, SB Eickhoff 1
  • 1Institut für Neurowissenschaften und Medizin (INM-2), Forschungszentrum Jülich, Jülich
  • 2Neuropsychiatry Division, Department of Psychiatry, University of Pennsylvania School of Medicine, Philadelphia, USA

Introduction: Emotion in daily life is often expressed in a multimodal fashion. Consequently, emotional information from one modality can influence processing in another. Although there is an extensive literature on the areas that are active during emotion processing, little is known about the mechanisms and neural interactions underlying bottom-up and top-down processes in crossmodal emotional integration. In a previous fMRI study assessing the neural correlates of audio-visual integration, we found that activity in the left amygdala is significantly attenuated when a neutral stimulus is paired with an emotional one, compared to conditions in which emotional stimuli were present in both channels.

Methods: Here we used dynamic causal modelling to investigate the networks underlying this emotion presence congruence effect. All 48 models included bilateral fusiform gyrus (FFG), bilateral superior temporal gyrus (STG), bilateral posterior superior temporal sulcus (pSTS) and the left amygdala, and assumed that FFG and STG project into the ipsilateral pSTS. The models differed in a) the presence or absence of reciprocal interhemispheric connections between FFG, STG and pSTS, b) the region projecting into the left amygdala, and c) the modulation of the effective connectivity towards the left amygdala.

Results: Our results provide evidence in favor of a model family whose members differ only in their interhemispheric connections. All winning models shared connections from bilateral FFG into the left amygdala and a modulatory influence of bilateral pSTS on these connections. Moreover, the right FFG showed a lateralization through stronger stimulus-driven (face) input into this region, whereas no such lateralization was present for the sound-driven input into the STG.

Conclusion: In summary, our data provide further evidence for a rightward lateralization of the FFG for face stimuli and, in particular, for a key role of the pSTS in the integration and gating of audio-visual emotional information.
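The abstract does not spell out how the three factors combine to yield 48 models. A minimal sketch of one possible factorization that produces exactly 48 combinations, purely as an illustration: the presence or absence of reciprocal interhemispheric connections for each of the three homotopic region pairs (2^3 = 8), the choice of region projecting into the left amygdala (assumed here to be one of FFG, STG or pSTS), and whether that projection is modulated (2). The factor levels for b) and c) are assumptions, not taken from the abstract.

```python
from itertools import product

# Hypothetical factorization of the 48-model space (assumed levels):
#   a) reciprocal interhemispheric connection present/absent for each
#      homotopic pair (FFG, STG, pSTS)            -> 2 * 2 * 2 = 8
#   b) region projecting into the left amygdala    -> 3 (assumed: FFG/STG/pSTS)
#   c) modulation of that connection on or off     -> 2 (assumed)
pairs = ("FFG", "STG", "pSTS")
interhemispheric = list(product([False, True], repeat=len(pairs)))  # 8 patterns
amygdala_source = ["FFG", "STG", "pSTS"]
modulated = [False, True]

# Enumerate every model as a small specification dictionary.
models = [
    {
        "interhemispheric": dict(zip(pairs, ih)),
        "amygdala_source": src,
        "modulated": mod,
    }
    for ih, src, mod in product(interhemispheric, amygdala_source, modulated)
]

print(len(models))  # 8 * 3 * 2 = 48
```

In an actual DCM analysis these specifications would be translated into intrinsic (A), modulatory (B) and input (C) connectivity matrices and compared via Bayesian model selection; the sketch only shows how a factorial design of this size can be enumerated.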