Raskopf, Charlotte
[UCL]
Collignon, Olivier
[UCL]
Falagiarda, Federica
[UCL]
Theoretical part: Human beings constantly experience social interactions in their daily lives. To interact properly, several non-verbal signals, such as facial and vocal information, must be integrated. Previous studies have shown that specific brain areas respond selectively to these signals. Nevertheless, the integration of facial and vocal information is less well understood. Through behavioral and fMRI experiments, we aimed to develop this understanding.

Research questions: The primary research questions of the fMRI experiment were: (a) does the integration of audiovisual information occur in unimodal selective regions, and (b) what is the nature of the multimodal representation of emotion in higher-level cortices? Before this experiment, two pilot experiments were conducted (a) to select the most relevant stimuli for the fMRI experiment, and (b) to investigate sex-related differences in emotion perception.

Methods: The two pilot experiments followed similar methods. In each experiment, twenty participants had to assess which emotions they perceived (measure: accuracy). For each actor or actress, each displayed emotion was presented separately in the auditory and the visual modality.

Results: The two best actors and the two best actresses were selected, and main effects of modality and of the observed person's sex on emotion perception were found. No interaction was found.

Discussion: Most of our findings were consistent with the previous literature. Strengths and limitations were discussed for each experiment. Finally, research and clinical implications of our findings were presented.
Bibliographic reference: Raskopf, Charlotte. Face and voice processing in a context of emotion perception. Faculté de psychologie et des sciences de l'éducation, Université catholique de Louvain, 2019. Prom.: Collignon, Olivier; Falagiarda, Federica.
Permanent URL: http://hdl.handle.net/2078.1/thesis:22064