Article ID: 5042550
Journal: Journal of Memory and Language
Published Year: 2017
Pages: 28
File Type: PDF
Abstract

• Implements alternative models of multimodal interaction during language processing.
• Tests a sub-lexical interactive model (MIM) and a lexical interaction model (TRACE+).
• The models differ on the influence of phonological rhyme on visual world behaviour.
• Effects of rhyme on participants' gaze are eliminated by visual and semantic competition.
• Only the model allowing sub-lexical multimodal interaction replicates the novel data.

Ambiguity in natural language is ubiquitous, yet spoken communication is effective because information carried in the speech signal is integrated with information available in the surrounding multimodal landscape. Language-mediated visual attention requires the integration of visual and linguistic information and has thus been used to examine properties of the architecture supporting multimodal processing during spoken language comprehension. In this paper we test predictions generated by alternative models of this multimodal system. A model in which multimodal information is combined at the level of lexical representations (TRACE) predicted a stronger effect of phonological rhyme than of semantic and visual information on gaze behaviour, whereas a model in which sub-lexical information can interact across modalities (MIM) predicted a greater influence of visual and semantic information than of phonological rhyme. Two visual world experiments designed to test these predictions offer support for sub-lexical multimodal interaction during online language processing.
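To make the architectural contrast concrete, the following is a minimal, purely illustrative Python sketch; it is not the authors' TRACE or MIM implementation, and the weights, cue values, and competitor items are all hypothetical. It only shows how moving the point of cross-modal integration from the lexical layer to sub-lexical units shifts which display item accrues the most activation.

```python
# Toy sketch (not the published models): two integration schemes over
# hypothetical cue overlaps between a spoken target and display items.

def lexical_integration(phon, vis, sem):
    """TRACE-style assumption: modalities combine only at the lexical
    layer, so phonological match dominates lexical activation."""
    return 2.0 * phon + vis + sem  # phonology weighted at lexical access

def sublexical_integration(phon, vis, sem):
    """MIM-style assumption: visual/semantic information interacts with
    sub-lexical units, diluting the influence of phonological rhyme."""
    return phon + 2.0 * (vis + sem)  # cross-modal input shapes pre-lexical units

# Hypothetical cue overlaps (0-1) for three competitor types.
items = {
    "rhyme competitor":    {"phon": 0.8, "vis": 0.1, "sem": 0.1},
    "visual competitor":   {"phon": 0.1, "vis": 0.8, "sem": 0.1},
    "semantic competitor": {"phon": 0.1, "vis": 0.1, "sem": 0.8},
}

for name, cues in items.items():
    lex = lexical_integration(**cues)
    sub = sublexical_integration(**cues)
    print(f"{name:20s} lexical={lex:.2f}  sub-lexical={sub:.2f}")
```

Under these toy weights the rhyme competitor receives the most activation when integration happens at the lexical layer, while the visual and semantic competitors dominate when cross-modal input reaches sub-lexical units, mirroring the qualitative contrast between the models' predictions described above.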

Related Topics
Life Sciences, Neuroscience, Cognitive Neuroscience