Article ID: 413245
Journal: Robotics and Autonomous Systems
Published Year: 2011
Pages: 16
File Type: PDF
Abstract

In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action–effect experiences by discovering effect categories. Once the effect categories are discovered, in the second step, affordance predictors for each behavior are obtained by learning the mapping from object features to effect categories. After learning, the robot can make plans to achieve desired goals, emulate the end states of demonstrated actions, monitor plan execution, and take corrective actions using the perceptual structures employed or discovered during learning. We argue that the proposed learning system shares crucial elements with the development of 7–10-month-old infants, who explore the environment and learn the dynamics of objects through goal-free exploration. In addition, we discuss goal emulation and planning in relation to older infants with no symbolic inference capability and to non-linguistic animals that utilize object affordances to make action plans.
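The two-step pipeline described above (cluster observed effects into categories, then learn a per-behavior map from object features to those categories) can be sketched as follows. This is an illustration only: the paper's actual method is a novel hierarchical clustering for non-homogeneous feature spaces, for which plain k-means and a 1-nearest-neighbor predictor are used here as stand-ins; all names (`kmeans`, `AffordancePredictor`) and the toy feature vectors are hypothetical, not from the paper.

```python
import random

def dist2(a, b):
    # squared Euclidean distance between two equal-length tuples
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    # Step 1 stand-in: group raw effect vectors into effect categories.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def effect_category(effect, centers):
    # index of the nearest effect-category prototype
    return min(range(len(centers)), key=lambda i: dist2(effect, centers[i]))

class AffordancePredictor:
    # Step 2 stand-in: per-behavior 1-NN map from object features
    # to the effect category the behavior is predicted to produce.
    def __init__(self):
        self.examples = []  # list of (feature_vector, category) pairs
    def fit(self, feats, cats):
        self.examples = list(zip(feats, cats))
    def predict(self, feat):
        f, c = min(self.examples, key=lambda fc: dist2(feat, fc[0]))
        return c
```

A "push" behavior, for instance, might yield large-displacement effects for round objects and small ones for boxy objects; the clustering separates these into two categories, and the predictor then maps an unseen object's features to the category the push is expected to cause.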

► Unsupervised learning of affordances through interaction and observation.
► Use of learned affordances in planning without pre-defined transition rules.
► Prediction of states and effects in the same perceptual space.
► Novel hierarchical clustering method for non-homogeneous feature spaces.
► Linking of results with animal development, imitation and planning.
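The highlight "planning without pre-defined transition rules" can be illustrated with a minimal sketch: because effects are predicted in the same perceptual space as states, the learned predictors themselves serve as the transition model for a forward search. The function names, the `predict_effect` callback, and the toy behaviors below are hypothetical illustrations, not the paper's implementation.

```python
from collections import deque

def dist2(a, b):
    # squared Euclidean distance between two equal-length tuples
    return sum((x - y) ** 2 for x, y in zip(a, b))

def plan(start, goal, behaviors, predict_effect, tol=0.05, max_depth=4):
    """Breadth-first search over behavior sequences.

    predict_effect(behavior, state) returns a predicted effect vector;
    the successor state is state + effect, so goals are matched directly
    in the perceptual space with no hand-written transition rules.
    """
    queue = deque([(start, [])])
    while queue:
        state, seq = queue.popleft()
        if dist2(state, goal) <= tol ** 2:
            return seq  # shortest behavior sequence reaching the goal
        if len(seq) >= max_depth:
            continue
        for b in behaviors:
            eff = predict_effect(b, state)
            nxt = tuple(s + e for s, e in zip(state, eff))
            queue.append((nxt, seq + [b]))
    return None  # no plan within max_depth
```

Plan monitoring follows the same pattern: after executing a behavior, the observed state is compared against the predicted one, and a mismatch triggers replanning from the actual state.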

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors
, , ,