Article ID Journal Published Year Pages File Type
4937019 Computers in Human Behavior 2017 14 Pages PDF
Abstract
Social science researchers increasingly recruit participants through Amazon's Mechanical Turk (MTurk) platform. Yet the physical isolation of MTurk participants and the perceived lack of experimental control have led to persistent concerns about the quality of the data that can be obtained from MTurk samples. In this paper we focus on two of the most salient concerns: that MTurk participants may not buy into interactive experiments, and that they may produce unreliable or invalid data. We review existing research on these topics and present new data to address these concerns. We find that insufficient attention is no more a problem among MTurk samples than among other commonly used convenience or high-quality commercial samples, and that MTurk participants buy into interactive experiments and trust researchers as much as participants in laboratory studies. Furthermore, we find that employing rigorous exclusion methods consistently boosts statistical power without introducing problematic side effects (e.g., substantially biasing the post-exclusion sample), and can thus provide a general solution for dealing with problematic respondents across samples. We conclude with a discussion of best practices and recommendations.
Related Topics
Physical Sciences and Engineering; Computer Science; Computer Science Applications