Language is multimodal in nature, and multimodal cues like iconic gestures contain semantic information that can be useful for language comprehension. In fact, prior corpus analyses have shown that iconic gestures tend to precede their lexical affiliates (e.g., verbs), suggesting that semantic information can become available from iconic gestures before speech. Here, we investigate whether iconic gestures depicting actions (e.g., playing the piano) contribute to linguistic prediction in Mandarin Chinese, a language in which nouns follow verbs. We extended the classic visual world paradigm to include a speaker in the middle of the screen producing an iconic gesture, with the gestural stroke starting 700 ms prior to the onset of the relevant verb. In each trial, participants listened to a sentence accompanied by a gesture (gesture and speech), a sentence without a gesture (speech only), or a sentence in which the critical verb was replaced by the disfluency sound 'umm' while the gesture was present (gesture only). Preliminary results show more eye-gaze to the target picture related to the noun in the sentence (e.g., piano) prior to verb onset (e.g., play) when iconic gestures are present than when they are absent, suggesting that the semantic information in iconic gestures can benefit linguistic prediction prior to, and even in the absence of, speech.
Chen, Xuanyi; Hu, Junfei; Huettig, Falk; Özyürek, Asli; et al. The effect of iconic gestures on linguistic prediction in Mandarin Chinese: a visual world paradigm study. The 29th AMLaP conference, Architectures and Mechanisms for Language Processing (AMLaP 2023) (San Sebastian, Spain, 31/08/2023 to 02/09/2023).