Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
350882 | 618458 | 2013 | 5-page PDF | Free download |
• The study compared crowd-sourced and social media recruits to in-lab participants.
• In-lab participants completed a behavioral (not computer-based) task.
• Online participants completed an adapted version of the behavioral task.
• The online adaptation proved highly effective; responses were equivalent across samples.
• Crowd-sourced recruits were significantly more diverse than the other samples, a desirable quality.
Recent and emerging technology permits psychologists to recruit and test participants in more ways than ever before. But to what extent can behavioral scientists trust these varied methods to yield reasonably equivalent results? Here, we took a behavioral, face-to-face task and converted it to an online test. We compared the online responses of participants recruited via Amazon’s Mechanical Turk (MTurk) and via social media postings on Twitter, Facebook, and Reddit. We also recruited a standard sample of students on a college campus and tested them in person rather than via a computer interface. The demographics of the three samples differed, with MTurk participants being significantly more socio-economically and ethnically diverse, yet the test results across the three samples were almost indistinguishable. We conclude that, for some behavioral tests, online recruitment and testing can be a valid, and sometimes even superior, partner to in-person data collection.
Journal: Computers in Human Behavior - Volume 29, Issue 6, November 2013, Pages 2156–2160