Article ID: 6874612
Journal: Journal of Computational Science
Published Year: 2015
Pages: 13
File Type: PDF
Abstract
In general, species such as mammals must learn from their environment to survive. Biologists theorize that species evolved over time by ancestors learning the most advantageous traits, which allowed them to propagate more successfully than their less effective counterparts. In many instances, learning occurs in a competitive environment, in which a species evolves alongside its food source and/or its predator. We propose an agent-based model of predators and prey with co-evolution through linear value-function Q-learning, allowing predators and prey to learn during their lifetimes and pass that information to their offspring. Each agent learns the importance of world features from the rewards it receives after each action. We are unaware of prior work that studies co-evolution of predator and prey through simulation in which each entity learns to survive within its world and passes that information on to its progeny, without requiring multiple training runs. We show that this learning results in a more successful species for both predator and prey, and that variations in the reward function do not have a significant impact when both species are learning. However, when only a single species is learning, the reward function may affect the results, although overall improvements to the system are still found. We believe that our approach will allow computational scientists to simulate these environments more accurately.
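To make the approach concrete, below is a minimal sketch of linear value-function Q-learning for a single agent, together with a simple weight-inheritance step in the spirit of passing learned information to offspring. This is an illustrative assumption, not the authors' implementation: the class name `LinearQAgent`, the feature representation, the epsilon-greedy policy, and the `inherit` decay parameter are all hypothetical choices.

```python
import numpy as np

class LinearQAgent:
    """Sketch of Q-learning with a linear value function over world features.

    Hypothetical example: feature extraction, action set, and inheritance
    scheme are assumptions, not the paper's exact model.
    """

    def __init__(self, n_features, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.w = np.zeros((n_actions, n_features))  # one weight vector per action
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def q(self, features, action):
        # Q(s, a) approximated as a linear combination of world features
        return self.w[action] @ features

    def act(self, features):
        # epsilon-greedy action selection
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax([self.q(features, a) for a in range(self.n_actions)]))

    def update(self, features, action, reward, next_features, done):
        # TD target: r + gamma * max_a' Q(s', a'); just r at episode end
        target = reward
        if not done:
            target += self.gamma * max(self.q(next_features, a)
                                       for a in range(self.n_actions))
        td_error = target - self.q(features, action)
        # gradient step on the weights of the action actually taken
        self.w[action] += self.alpha * td_error * features

    def inherit(self, decay=1.0):
        # Offspring start from the parent's learned weights (optionally decayed),
        # mirroring the idea of passing learned information to progeny.
        child = LinearQAgent(self.w.shape[1], self.n_actions,
                             self.alpha, self.gamma, self.epsilon)
        child.w = decay * self.w.copy()
        return child
```

In such a setup, the learned weights directly expose which world features each agent has come to treat as important, which is what allows the information to be handed to offspring without restarting training.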
Related Topics
Physical Sciences and Engineering; Computer Science; Computational Theory and Mathematics
Authors