| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 4937043 | Computers in Human Behavior | 2017 | 37 | |
Abstract
Computer-generated faces are increasingly prevalent in a range of settings. While the quality of synthetic face appearance has improved dramatically, participants can usually distinguish real from artificial faces accurately, and artificial faces convey high-level category information such as gender, species, and agency differently from real faces. Artificial faces are also remembered more poorly than real faces, consistent with an “out-group” disadvantage for artificial faces, which are probably encountered less frequently than real ones. In the current study, we asked whether these differences in how real and artificial faces are perceived extend to how trustworthiness is estimated from images of real and computer-generated faces. In two experiments, we examined how absolute trustworthiness ratings (Exp. 1) and relative trustworthiness judgments (Exp. 2) were affected by presenting participants with real faces or with artificial faces created using the real faces as models. In both tasks, trustworthiness was perceived differently in artificial faces: absolute trustworthiness ratings were lower for artificial faces, and relative trustworthiness judgments were also less accurate for artificial faces. Computer-generated faces thus do not signal trustworthiness in the same way that real faces do, which has important practical and theoretical implications for future social cognition research.
Related Topics
- Physical Sciences and Engineering
- Computer Science
- Computer Science Applications
Authors
Benjamin Balas, Jonathan Pacella