Article ID: 485314
Journal: Procedia Computer Science
Published Year: 2013
Pages: 11
File Type: PDF
Abstract

We examine a model of human causal cognition that generally deviates from normative systems such as classical logic and probability theory. For two-armed bandit problems, we demonstrate the efficacy of our loosely symmetric model (LS), which implements two cognitive biases characteristic of human judgment: symmetry and mutual exclusivity. Specifically, we use LS as a simple value function within the reinforcement-learning framework. The resulting cognitively biased valuations closely describe human causal intuitions. We further show that operating LS under the simplest greedy policy yields superior reliability and robustness, overcoming the usual speed-accuracy trade-off and effectively removing the need for parameter tuning.
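The abstract itself gives no equations, so the sketch below uses one commonly cited form of the loosely symmetric model, stated here as an assumption: for a 2x2 contingency table with counts a (arm pulled, reward), b (arm pulled, no reward), c (other arm, reward), d (other arm, no reward), LS(a,b,c,d) = (a + c*d/(b+d)) / (a + c*d/(b+d) + b + d*c/(a+c)). The function names, the unit-count prior, and the tie-breaking rule are illustrative choices, not details from the paper.

```python
import random

EPS = 1e-9  # guards divisions when a contingency-table row is empty


def ls(a, b, c, d):
    """Loosely symmetric (LS) value of an arm.

    a, b: reward/no-reward counts for this arm
    c, d: reward/no-reward counts for the other arm
    (One commonly cited form of the LS model; treat as an assumption.)
    """
    num = a + (d / (b + d + EPS)) * c
    den = num + b + (c / (a + c + EPS)) * d
    return num / (den + EPS)


def run_bandit(p0, p1, steps, seed=0):
    """Two-armed Bernoulli bandit played greedily on LS values.

    Counts start at 1 (an assumed uniform prior) so that both arms
    have defined values from the first step. Returns pulls per arm.
    """
    rng = random.Random(seed)
    succ, fail = [1, 1], [1, 1]  # assumed prior smoothing
    pulls = [0, 0]
    for _ in range(steps):
        v0 = ls(succ[0], fail[0], succ[1], fail[1])
        v1 = ls(succ[1], fail[1], succ[0], fail[0])
        arm = 0 if v0 >= v1 else 1  # purely greedy: no explicit exploration
        p = p0 if arm == 0 else p1
        if rng.random() < p:
            succ[arm] += 1
        else:
            fail[arm] += 1
        pulls[arm] += 1
    return pulls
```

The point of the sketch is the abstract's claim that greedy LS needs no exploration parameter: a run of failures on the current arm both lowers that arm's LS value and raises the other arm's (via the d/(b+d) and c/(a+c) cross-terms), so the agent switches arms without any epsilon or temperature.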
