Article ID: 4328916
Journal: Brain Research
Published Year: 2008
Pages: 14 Pages
File Type: PDF
Abstract

Humans rely on the integration of information from multiple sensory modalities to interact successfully with their environment. In the present series of studies, we investigated how the visuomotor system integrates congruent and incongruent visual and tactile sensory inputs for goal-directed action comprehension and execution. Specifically, we investigated whether orienting of attention towards vision, touch, or both vision and touch enhances the impact of one modality over the other. In Experiment 1, participants were presented with visual (on a computer monitor) and/or tactile (to the unseen left hand) sensory inputs of an action, and made button-press responses to categorise it as ‘wide’ or ‘narrow’. Responses were significantly faster when attending to vision compared with touch, and faster in fully congruent compared with grasp-congruent and incongruent conditions. Thus, both vision and touch contribute to action comprehension, but visual inputs are in general more influential than tactile inputs. Moreover, responses to wide-grasp actions were significantly faster than to narrow-grasp actions. In Experiment 2, the same task was performed but participants made reach-to-grasp movements, recorded with a ProReflex motion capture system. Although actions towards wide objects produced a wider peak grasp overall than actions towards narrow objects, in contrast to action comprehension there was no systematic effect of attended modality or tactile input on action execution. We speculate that action comprehension and action execution utilize visual and tactile inputs differentially.

Related Topics
Life Sciences; Neuroscience; Neuroscience (General)
Authors