Article ID: 535733
Journal: Pattern Recognition Letters
Published Year: 2006
Pages: 10 Pages
File Type: PDF
Abstract

This paper shows that sensory–motor coordination contributes to the performance of situated models on the high-level task of artificial gaze control for gender recognition in static natural images. To investigate the advantage of sensory–motor coordination, we compare a non-situated model of gaze control with a situated model. The non-situated model is incapable of sensory–motor coordination. It shifts the gaze according to a fixed set of locations, optimised by an evolutionary algorithm. The situated model determines gaze shifts on the basis of local inputs in a visual scene. An evolutionary algorithm optimises the model’s gaze control policy. In the experiments performed, the situated model outperforms the non-situated model. By adopting a Bayesian framework, we show that the mechanism of sensory–motor coordination is the cause of this performance difference. The essence is that the mechanism maximises task-specific information in the observations over time, by establishing dependencies between multiple actions and observations.
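Below is a minimal, illustrative sketch of the kind of sequential Bayesian evidence accumulation the abstract refers to, in which each gaze shift depends on what has been observed so far. The gaze locations, per-class feature means, and the simple threshold policy are assumptions made up for this example; they are a stand-in for, not a reproduction of, the paper's evolved gaze-control policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class setting (0 = female, 1 = male). Each gaze location
# yields a noisy scalar feature whose mean depends on the class; the numbers
# below are illustrative only, not taken from the paper.
CLASS_MEANS = {           # (mean for class 0, mean for class 1) per location
    "eyes":  (0.0, 1.0),
    "mouth": (0.0, 0.6),
    "cheek": (0.0, 0.0),  # uninformative location
}

def observe(true_class, loc):
    """Noisy observation o_t obtained by fixating location a_t."""
    return rng.normal(CLASS_MEANS[loc][true_class], 1.0)

def update(posterior, obs, loc):
    """One Bayesian step: p(c | o_1..t, a_1..t) ∝ p(o_t | c, a_t) p(c | o_1..t-1)."""
    lik = np.array([np.exp(-0.5 * (obs - CLASS_MEANS[loc][c]) ** 2) for c in (0, 1)])
    post = posterior * lik
    return post / post.sum()

def situated_policy(posterior):
    """Illustrative stand-in for a situated gaze policy: keep fixating the most
    informative region while the two classes are still hard to separate."""
    return "eyes" if abs(posterior[1] - posterior[0]) < 0.8 else "mouth"

true_class = 1
posterior = np.array([0.5, 0.5])
loc = "eyes"
for t in range(5):
    obs = observe(true_class, loc)
    posterior = update(posterior, obs, loc)
    print(f"t={t} loc={loc:5s} obs={obs:+.2f} p(male)={posterior[1]:.2f}")
    loc = situated_policy(posterior)      # next action depends on current belief
```

The point of the sketch is the action–observation coupling: because the next fixation depends on the posterior after the current observation, successive observations are not independent of the agent's actions, which is the dependency structure the abstract credits for the situated model's advantage.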

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition
Authors