Article ID: 6866565
Journal: Neurocomputing
Published Year: 2014
Pages: 13
File Type: PDF
Abstract
We investigate the role of redundancy in the exploratory learning of inverse functions, where an agent learns to achieve goals by performing actions and observing their outcomes. We present an analysis of the linear redundant case and study goal-directed exploration approaches, which are empirically successful but have hardly been theorized beyond negative results for special cases, and we prove their convergence to the optimal solution. We show that the learning curves of such processes are intrinsically low-dimensional and S-shaped, which explains previous empirical findings, and we finally compare our results to non-linear domains.
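As a rough, illustrative sketch only (not the algorithm analyzed in the paper): the snippet below assumes a linear forward function x = A q with a redundant action space (dim(q) > dim(x)) and runs a simple goal-directed exploration loop, in which the agent tries goals with its current inverse estimate plus exploratory noise, observes the outcomes, and refits the inverse by least squares. The map A, the noise level, and the goal distribution are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

dim_q, dim_x = 5, 2                  # redundant: more action than outcome dimensions
A = rng.normal(size=(dim_x, dim_q))  # unknown linear forward function x = A q

G = np.zeros((dim_q, dim_x))         # current inverse estimate g(x*) = G x*
actions, outcomes = [], []

for t in range(500):
    x_goal = rng.uniform(-1, 1, size=dim_x)        # sample a goal
    q = G @ x_goal + 0.1 * rng.normal(size=dim_q)  # act: current inverse plus exploratory noise
    x = A @ q                                      # observe the outcome
    actions.append(q)
    outcomes.append(x)
    # refit the inverse on all (outcome, action) pairs observed so far
    X, Q = np.asarray(outcomes), np.asarray(actions)
    B, *_ = np.linalg.lstsq(X, Q, rcond=None)      # least squares solve of X @ B ~= Q
    G = B.T

# evaluate: how closely do the commanded actions reach a batch of test goals?
goals = rng.uniform(-1, 1, size=(100, dim_x))
errors = np.linalg.norm(goals - (A @ (G @ goals.T)).T, axis=1)
print("mean goal error after exploration:", errors.mean())
```

Recording the goal error against the number of exploration steps in such a toy setup is one way to inspect the shape of the learning curve discussed in the abstract.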
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence