Article ID | Journal | Published Year | Pages
---|---|---|---
7286323 | Cognition | 2016 | 10
Abstract
Little is known about how listeners represent another person's spatial perspective during language processing (e.g., two people looking at a map from different angles). Can listeners use contextual cues such as speaker identity to access a representation of the interlocutor's spatial perspective? In two eye-tracking experiments, participants received auditory instructions to move objects around a screen from two randomly alternating spatial perspectives (45° vs. 315° or 135° vs. 225° rotations from the participant's viewpoint). Instructions were spoken either by one voice, where the speaker's perspective switched at random, or by two voices, where each speaker maintained one perspective. Analysis of participant eye-gaze showed that interpretation of the instructions improved when each viewpoint was associated with a different voice. These findings demonstrate that listeners can learn mappings between individual talkers and viewpoints, and use these mappings to guide online language processing.
Related Topics
Life Sciences
Neuroscience
Cognitive Neuroscience
Authors
Rachel A. Ryskin, Ranxiao Frances Wang, Sarah Brown-Schmidt