Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6835840 | Computers in Human Behavior | 2018 | 45 | |
Abstract
The large body of literature on the comparability of mean scores for self-report survey responses gathered using paper-and-pencil and computer data collection methodologies has yielded inconclusive results. However, no comprehensive meta-analysis has been conducted in this field, and the meta-analyses available for specific measures have typically not differentiated between studies using between-groups and within-subjects designs. In addition, few individual studies, and no meta-analyses, have used correct statistical procedures to determine the equivalence of the two methodologies. Consequently, we conducted two meta-analyses assessing quantitative equivalence (i.e., mean scores): the first comprised 144 independent effect sizes from studies with between-groups designs, and the second included 70 independent effect sizes from studies using within-subjects designs. Both meta-analyses indicated equivalence of mean scores across conditions, with large heterogeneity of variance in the between-groups analysis; the presence of others in both the paper-and-pencil and computer conditions accounted for a significant portion of this variance. Heterogeneity of variance was small in the within-subjects analysis. Overall, the results indicate that mean scores for self-report surveys administered via paper-and-pencil and computer are comparable, although heterogeneity differed between the two study designs. Equivalence testing is demonstrated to be the appropriate statistical procedure for this type of research.
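The abstract recommends equivalence testing rather than conventional null-hypothesis significance testing for comparing administration modes. A common form of equivalence testing is the two one-sided tests (TOST) procedure; the sketch below illustrates it for two independent group means in Python. The summary statistics, group labels, and equivalence bounds are illustrative assumptions, not values or code from the article.

```python
# Minimal sketch of an equivalence test (TOST, two one-sided tests) for two
# independent group means, such as paper-and-pencil vs. computer survey scores.
# All numbers below are hypothetical placeholders, not data from the study.
import math
from scipy import stats

def tost_independent(mean1, sd1, n1, mean2, sd2, n2, low_bound, high_bound):
    """Two one-sided Welch t-tests against raw-score equivalence bounds.

    Returns the larger of the two one-sided p-values; equivalence is
    concluded when this value falls below the chosen alpha.
    """
    # Standard error of the mean difference (Welch, unequal variances)
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    # Welch-Satterthwaite degrees of freedom
    df = (sd1**2 / n1 + sd2**2 / n2) ** 2 / (
        (sd1**2 / n1) ** 2 / (n1 - 1) + (sd2**2 / n2) ** 2 / (n2 - 1)
    )
    diff = mean1 - mean2
    # Test 1: is the difference greater than the lower equivalence bound?
    t_lower = (diff - low_bound) / se
    p_lower = stats.t.sf(t_lower, df)
    # Test 2: is the difference less than the upper equivalence bound?
    t_upper = (diff - high_bound) / se
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper), df

if __name__ == "__main__":
    # Hypothetical paper-and-pencil vs. computer group summaries
    p_value, df = tost_independent(
        mean1=50.2, sd1=10.1, n1=120,    # paper-and-pencil group
        mean2=49.8, sd2=9.7, n2=118,     # computer group
        low_bound=-2.0, high_bound=2.0,  # raw-score equivalence bounds
    )
    print(f"TOST p = {p_value:.4f} (df = {df:.1f}); equivalent if p < .05")
```

The key design point is that equivalence is only declared when both one-sided tests reject, i.e., when the observed difference is shown to lie within bounds chosen in advance as practically negligible; a non-significant conventional t-test does not by itself establish equivalence.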
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science Applications
Authors
Arne Weigold, Ingrid K. Weigold, Sara N. Natera