Article ID | Journal | Published Year | Pages
---|---|---|---
6900857 | Procedia Computer Science | 2018 | 7 Pages
Abstract
Most artificial general intelligence (AGI) system developers have been focused upon intelligence (the ability to achieve goals, perform tasks or solve problems) rather than motivation (*why* the system does what it does). As a result, most AGIs have a non-human-like, and arguably dangerous, top-down hierarchical goal structure as the sole driver of their choices and actions. On the other hand, the independent core observer model (ICOM) was specifically designed to have a human-like “emotional” motivational system. We report here on the most recent versions of, and experiments upon, our latest ICOM-based systems. We have moved from a partial implementation of the abstruse and overly complex Wilcox model of emotions to a more complete implementation of the simpler Plutchik model. We have seen responses that, at first glance, were surprising and seemingly illogical, but which mirror human responses and make total sense when considered more fully in the context of surviving in the real world. For example, in “isolation studies”, we find that any input, even pain, is preferred over having no input at all. We believe the fact that the system generates such unexpected but “humanlike” behavior is a very good sign that we are successfully capturing the essence of the only known operational motivational system.
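To make the kind of motivational mechanism described here concrete, below is a minimal, hypothetical sketch (not the authors' ICOM implementation) of an agent whose internal state is a vector over Plutchik's eight primary emotions. The `EmotionalState` class, the decay constants, and the `preference()` scoring rule are all invented for illustration; the sketch only reproduces the qualitative "isolation study" result, that any input, even a painful one, scores higher than no input at all.

```python
# Hypothetical illustration of a Plutchik-style motivational state.
# This is NOT the ICOM codebase; all names and constants are assumptions.
from dataclasses import dataclass, field

# Plutchik's eight primary emotions.
PLUTCHIK_AXES = ("joy", "trust", "fear", "surprise",
                 "sadness", "disgust", "anger", "anticipation")

@dataclass
class EmotionalState:
    levels: dict = field(
        default_factory=lambda: {a: 0.0 for a in PLUTCHIK_AXES})

    def apply(self, stimulus: dict) -> None:
        """Blend a stimulus (a partial emotion vector) into the state."""
        for axis, delta in stimulus.items():
            self.levels[axis] += delta

    def decay_toward_isolation(self) -> None:
        """With no input, engagement fades and sadness accumulates."""
        for axis in PLUTCHIK_AXES:
            self.levels[axis] *= 0.5   # all emotional responses fade...
        self.levels["sadness"] += 1.0  # ...while isolation breeds sadness

    def preference(self) -> float:
        """Crude scalar preference: positive axes minus negative axes."""
        pos = (self.levels["joy"] + self.levels["trust"]
               + self.levels["anticipation"])
        neg = (self.levels["sadness"] + self.levels["fear"]
               + self.levels["disgust"])
        return pos - neg

# Compare two futures: repeated painful input versus total isolation.
# Even pain carries some engagement (modeled here as anticipation).
pain = {"fear": 0.4, "anger": 0.3, "anticipation": 0.5}
with_pain, isolated = EmotionalState(), EmotionalState()
for _ in range(10):
    with_pain.apply(pain)
    isolated.decay_toward_isolation()

print(f"pain:      {with_pain.preference():+.2f}")  # about +1.00
print(f"isolation: {isolated.preference():+.2f}")   # about -2.00, i.e. worse
```

Under these assumed constants the painful future scores higher than the isolated one, mirroring the paper's observation that a system with an emotional motivational state prefers any input over none.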
Related Topics
Physical Sciences and Engineering › Computer Science › Computer Science (General)
Authors
David J. Kelley, Mark R. Waser