Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
9951938 | Evaluation and Program Planning | 2018 | 15 Pages |
Abstract
At its core, evaluation involves the generation of value judgments. These evaluative judgments are based on comparing an evaluand's performance to what the evaluand is supposed to do (criteria) and how well it is supposed to do it (standards). The aim of this four-phase study was to test whether criteria and standards can be set via crowdsourcing, a potentially cost- and time-effective approach to collecting public opinion data. In the first three phases, participants were presented with a program description, then asked to complete a task to either identify criteria (phase one), weigh criteria (phase two), or set standards (phase three). Phase four found that the crowd-generated criteria were high quality; more specifically, that they were clear and concise, complete, non-overlapping, and realistic. Overall, the study concludes that crowdsourcing has the potential to be used in evaluation for setting stable, high-quality criteria and standards.
Related Topics
Health Sciences
Medicine and Dentistry
Public Health and Health Policy
Authors
Elena Harman, Tarek Azzam