Article ID: 4943009
Journal: Expert Systems with Applications
Published Year: 2018
Pages: 42
File Type: PDF
Abstract
A truly autonomous artificial intelligence agent should be able to drive its own learning process: decide what to explore and what to learn, identify what constitutes potentially useful data as examples of concepts, and choose what strategy to follow to solve a new task. Several lines of work in machine learning pursue this aim. Approaches that introduce new concepts, such as predicate invention in Inductive Logic Programming (ILP), normally require the user to select the examples. Techniques that learn behavior policies through exploration, such as Reinforcement Learning (RL) with intrinsic motivation to guide the agent into interesting areas and discover new goals, assume that all states and actions are predefined in advance. In this paper, we describe a system, called ADC, that combines ILP with predicate invention and RL with intrinsic motivation to discover new concepts, states, and actions and to learn behavior policies. ADC drives its own learning process, collecting its own examples to autonomously learn concepts. These new concepts can then be used to describe the environment and to define new states and actions with which behaviors for solving tasks are learned. We show the effectiveness of our approach in simulated robotics environments.
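The abstract gives no implementation details, so the following is only a minimal, self-contained sketch of the kind of loop it describes, not the authors' ADC system: a count-based novelty bonus stands in for intrinsic motivation, and a trivial frequency-based predicate stands in for ILP predicate invention. All names here (GridWorld, run_q_learning, frontier) are hypothetical and not from the paper.

# Illustrative sketch only -- not the authors' ADC system.
# Intrinsic motivation is approximated by a count-based novelty bonus;
# "predicate invention" is approximated by a frequency-based predicate.
import math
import random
from collections import defaultdict

class GridWorld:
    """Tiny deterministic grid; the goal cell is not known to the agent."""
    def __init__(self, size=5, goal=(4, 4)):
        self.size, self.goal = size, goal
        self.pos = (0, 0)

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):  # 0: up, 1: down, 2: left, 3: right
        x, y = self.pos
        dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][action]
        self.pos = (min(max(x + dx, 0), self.size - 1),
                    min(max(y + dy, 0), self.size - 1))
        return self.pos, 1.0 if self.pos == self.goal else 0.0

def run_q_learning(env, reward_fn, abstract, episodes, steps=40,
                   alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning over abstract states; reward_fn decides whether the
    agent is driven by the intrinsic bonus or by the external task reward."""
    Q, counts = defaultdict(float), defaultdict(int)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            a_s = abstract(s)
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda k: Q[(a_s, k)]))
            s2, ext_r = env.step(a)
            counts[s2] += 1
            r = reward_fn(s2, ext_r, counts)
            a_s2 = abstract(s2)
            best = max(Q[(a_s2, k)] for k in range(4))
            Q[(a_s, a)] += alpha * (r + gamma * best - Q[(a_s, a)])
            s = s2
    return Q, counts

env = GridWorld()

# Phase 1: exploration driven only by a count-based novelty bonus (intrinsic).
_, counts = run_q_learning(env, lambda s, ext, c: 1.0 / math.sqrt(c[s]),
                           abstract=lambda s: s, episodes=300)

# Phase 2 (stand-in for predicate invention): invent a predicate that holds
# for the cells the agent found hardest to reach during exploration.
median = sorted(counts.values())[len(counts) // 2]
frontier = {s for s in counts if counts[s] <= median}

# Phase 3: learn the external task over abstract states (cell, frontier(S)).
Q, _ = run_q_learning(env, lambda s, ext, c: ext,
                      abstract=lambda s: (s, s in frontier), episodes=500)
print(f"invented predicate covers {len(frontier)} cells; |Q| = {len(Q)}")

The sketch keeps the three ingredients the abstract names separate on purpose: self-driven example collection (phase 1), concept introduction from those examples (phase 2), and policy learning over states redefined by the new concept (phase 3).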
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors