Article ID: 411110
Journal: Neurocomputing
Published Year: 2009
Pages: 8
File Type: PDF
Abstract

Being able to estimate the pose and location of nearby objects is a fundamental skill for any natural or artificial agent that actively interacts with its environment. The methods for extracting and integrating visual cues employed in artificial systems are usually very different from the solutions found in nature. We present a biologically plausible model of distance and orientation estimation, grounded in neuroscience findings, that is suitable for implementation in a robotic vision-based grasping setup. The key novelties of the model are its use of simple retinal and proprioceptive data and its integration of stereoptic and perspective cues.
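The abstract does not describe the integration mechanism itself, but the two cue families it names can be illustrated with standard pinhole-camera geometry. The following is a minimal sketch under that assumption; all function names, parameters, and the fixed-weight fusion are illustrative choices, not the paper's model or notation:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereoptic cue: classic pinhole stereo gives Z = f * B / d,
    where f is focal length (pixels), B the camera baseline (meters),
    and d the binocular disparity (pixels)."""
    return focal_px * baseline_m / disparity_px

def depth_from_size(focal_px, known_height_m, image_height_px):
    """Perspective cue: for an object of known physical size H projecting
    to h pixels, Z = f * H / h."""
    return focal_px * known_height_m / image_height_px

def fuse_depth(z_stereo, z_perspective, w_stereo=0.5):
    """Toy cue integration: a fixed weighted average. A real system would
    weight each cue by its estimated reliability (e.g. disparity noise
    grows with distance, so the perspective cue could dominate far away)."""
    return w_stereo * z_stereo + (1.0 - w_stereo) * z_perspective

# Example: f = 800 px, baseline 10 cm, disparity 40 px -> 2.0 m;
# a 20 cm object subtending 80 px -> 2.0 m; the fused estimate agrees.
z_s = depth_from_disparity(800.0, 0.10, 40.0)
z_p = depth_from_size(800.0, 0.20, 80.0)
z = fuse_depth(z_s, z_p)
```

Both cues degrade differently with distance, which is one motivation for integrating them rather than relying on either alone.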

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence