Article ID: 5041483
Journal: Cognition
Published Year: 2017
Pages: 17
File Type: PDF
Abstract

• We study a mathematical model for how people can acquire moral theories.
• Moral theories specify how other agents' utilities should be valued in one's own utility (an illustrative equation follows this list).
• Hierarchical Bayes is used to infer moral theories from sparse, noisy observations.
• Learners set their moral theories to be consistent externally and internally.
• Simulations show moral change and describe conditions for expanding moral circles.
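As a purely illustrative sketch of what weighting other agents' utilities within one's own utility could look like (the notation below is ours, not necessarily the paper's), a recursive utility calculus might take the form

$$
U_i(a) \;=\; R_i(a) \;+\; \sum_{j \neq i} \alpha_{ij}\, U_j(a), \qquad \alpha_{ij} \ge 0,
$$

where $R_i(a)$ is agent $i$'s direct reward from action $a$, $U_j(a)$ is agent $j$'s utility, and the weights $\alpha_{ij}$ encode how much $i$ values $j$'s welfare. On this reading, a moral theory amounts to the set of $\alpha_{ij}$ weights, and learning a moral theory means inferring and adjusting those weights.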

We introduce a computational framework for understanding the structure and dynamics of moral learning, with a focus on how people learn to trade off the interests and welfare of different individuals in their social groups and the larger society. We posit a minimal set of cognitive capacities that together can solve this learning problem: (1) an abstract and recursive utility calculus to quantitatively represent welfare trade-offs; (2) hierarchical Bayesian inference to understand the actions and judgments of others; and (3) meta-values for learning by value alignment, both externally to the values of others and internally to make moral theories consistent with one's own attachments and feelings. Our model explains how children can build, from sparse, noisy observations of how a small set of individuals make moral decisions, a broad moral competence able to support an infinite range of judgments and decisions, one that generalizes even to people they have never met and situations they have not been in or observed. It also provides insight into the causes and dynamics of moral change over time, including cases in which moral change can be rapidly progressive, shifting values significantly in just a few generations, and cases in which it is likely to move more slowly.
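To make the second ingredient concrete, here is a minimal sketch of hierarchical Bayesian inference over welfare-trade-off weights from a few observed choices. This is not the authors' code: the function names (choice_loglik, hierarchical_posterior), the softmax choice rule, the Beta-shaped group-level prior, and the grid approximation are all our assumptions, chosen only to illustrate how sparse, noisy observations of a few individuals could support inferences that generalize to a shared, group-level moral theory.

```python
# Hypothetical sketch (not the paper's implementation): infer how much observed
# agents weigh another person's welfare (alpha) from a few noisy choices,
# assuming softmax choice over a linear utility
#   U(action) = self_payoff + alpha * other_payoff.
import numpy as np

def choice_loglik(alpha, choices, beta=2.0):
    """Log-likelihood of one person's observed choices given a welfare weight alpha.
    choices: list of (options, picked); options is an array of
    (self_payoff, other_payoff) rows, picked indexes the chosen row."""
    ll = 0.0
    for options, picked in choices:
        u = options[:, 0] + alpha * options[:, 1]        # utility of each option
        logp = beta * u - np.logaddexp.reduce(beta * u)  # softmax log-probabilities
        ll += logp[picked]
    return ll

# Grids over individual weights and over a group-level mean weight
# (the group-level mean plays the role of a shared moral theory here).
alpha_grid = np.linspace(0.0, 1.0, 101)
mu_grid = np.linspace(0.0, 1.0, 101)

def hierarchical_posterior(all_choices, concentration=10.0):
    """Posterior over the group-level mean weight mu, integrating out each
    individual's alpha under a Beta(mu*c, (1-mu)*c) prior (an illustrative choice)."""
    log_post_mu = np.zeros_like(mu_grid)
    for i, mu in enumerate(mu_grid):
        a, b = mu * concentration + 1e-6, (1 - mu) * concentration + 1e-6
        log_prior = (a - 1) * np.log(alpha_grid + 1e-9) + (b - 1) * np.log(1 - alpha_grid + 1e-9)
        for person_choices in all_choices:
            loglik = np.array([choice_loglik(al, person_choices) for al in alpha_grid])
            # marginal likelihood for this person: integrate out alpha on the grid
            log_post_mu[i] += np.logaddexp.reduce(log_prior + loglik) - np.logaddexp.reduce(log_prior)
    post = np.exp(log_post_mu - log_post_mu.max())
    return post / post.sum()

# Toy data: two observed individuals, each repeatedly choosing the more generous option.
options = np.array([[1.0, 0.0],   # keep everything for self
                    [0.7, 0.7]])  # share at a small cost to self
data = [[(options, 1)] * 3, [(options, 1)] * 2]
posterior = hierarchical_posterior(data)
print("posterior mean of group-level welfare weight:",
      float((mu_grid * posterior).sum()))
```

Under these illustrative assumptions, a handful of generous choices by each observed individual shifts the posterior over the group-level weight toward valuing others' welfare, which gives a flavor of how observations of a small set of individuals could generalize to people the learner has never met.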

Related Topics
Life Sciences, Neuroscience, Cognitive Neuroscience
Authors