Article ID: 403283
Journal: Knowledge-Based Systems
Published Year: 2006
Pages: 8
File Type: PDF
Abstract

Objects of interest are represented in the brain simultaneously in different frames of reference. Knowing the positions of one’s head and eyes, for example, one can compute the body-centred position of an object from its perceived coordinates on the retinae. We propose a simple, fully trained attractor network that computes head-centred coordinates given the eye position and the perceived retinal position of an object. We demonstrate this system on artificial data and then apply it within a fully neurally implemented control system that visually guides a simulated robot to a table to grasp an object. The integrated system receives its input from a primitive visual system with a what–where pathway that localises the target object in the visual field. The coordinate transform network combines the visually perceived object position with the camera pan-tilt angle to compute the target position in a body-centred frame of reference. This position is then used by a reinforcement-trained network to dock a simulated PeopleBot robot at a table so that it can reach the object. Hence, computing coordinate transformations with an attractor network is both biologically relevant and technically useful for this important class of computations.
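The transformation the attractor network is trained to perform can be illustrated with a simple analytical approximation: the body-centred direction of the target is roughly the angular offset of the object on the retina (or camera image) shifted by the camera's pan and tilt. The following Python sketch is purely illustrative and is not the paper's network or its training procedure; the function name, parameters, and the pinhole-style angle approximation are assumptions introduced here for clarity.

```python
# Illustrative sketch only: approximates the retinal-to-body-centred transform
# that the attractor network in the paper learns. Names and parameters are
# hypothetical, not taken from the paper.
def retinal_to_body_centred(pixel_xy, image_size, fov_deg, pan_tilt_deg):
    """Convert a perceived image position to a body-centred azimuth/elevation.

    pixel_xy     : (x, y) object position in the image, in pixels
    image_size   : (width, height) of the image, in pixels
    fov_deg      : (horizontal, vertical) camera field of view, in degrees
    pan_tilt_deg : (pan, tilt) camera angles relative to the body, in degrees
    """
    px, py = pixel_xy
    w, h = image_size
    fov_h, fov_v = fov_deg
    pan, tilt = pan_tilt_deg

    # Angular offset of the object from the optical axis ("retinal" position).
    retinal_az = (px - w / 2.0) / w * fov_h
    retinal_el = (h / 2.0 - py) / h * fov_v

    # Body-centred direction: retinal offset shifted by the camera orientation.
    return pan + retinal_az, tilt + retinal_el


# Example: object 40 px right of centre in a 160x120 image with a 60x45 deg
# field of view, camera panned 10 deg to the left of the body midline.
az, el = retinal_to_body_centred((120, 60), (160, 120), (60.0, 45.0), (-10.0, 0.0))
print(f"body-centred azimuth {az:.1f} deg, elevation {el:.1f} deg")
```

In the paper this mapping is not computed analytically but learned by an attractor network operating on population-coded inputs, which is what gives the approach its biological relevance.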

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors