Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
9953008 | Journal of Memory and Language | 2018 | 25 |
Abstract
It is well-known in statistics (e.g., Gelman & Carlin, 2014) that treating a result as publishable just because the p-value is less than 0.05 leads to overoptimistic expectations of replicability. When an underpowered study yields a significant result, the effect size estimate is guaranteed to be exaggerated and noisy; these exaggerated estimates get published, leading to an overconfident belief in replicability. We demonstrate the adverse consequences of this statistical significance filter by conducting seven direct replication attempts (268 participants in total) of a recent paper (Levy & Keller, 2013). We show that the published claims are so noisy that even non-significant results are fully compatible with them. We also demonstrate the contrast between such small-sample studies and a larger-sample study; the latter generally yields a less noisy estimate but also a smaller effect magnitude, which looks less compelling but is more realistic. We reiterate several suggestions from the methodology literature for improving current practices.
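The filtering mechanism the abstract describes can be illustrated with a short simulation. The sketch below is a minimal illustration in Python, not the paper's own analysis; the true effect of 10, the standard error of 20, and the 1.96 cutoff are made-up assumptions chosen to represent an underpowered study. When the true effect is small relative to a study's standard error, only exaggerated estimates clear the significance threshold, so conditioning publication on p < 0.05 systematically inflates published effect sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers (assumptions for illustration, not from the paper):
true_effect = 10.0   # small true effect, e.g. a 10 ms reading-time difference
se = 20.0            # standard error of a single underpowered study
n_sims = 100_000     # number of simulated studies

# Each simulated study produces one noisy effect estimate.
estimates = rng.normal(true_effect, se, n_sims)

# The statistical significance filter: keep only |t| > 1.96, i.e. p < .05.
significant = estimates[np.abs(estimates) / se > 1.96]

print(f"share of studies reaching significance: {significant.size / n_sims:.3f}")
print(f"mean estimate across all studies:       {estimates.mean():.1f}")
print(f"mean |estimate| among significant ones: {np.abs(significant).mean():.1f}")
```

With these illustrative numbers, only about 8% of simulated studies reach significance, and the significant estimates average roughly 48, nearly five times the true effect of 10: exactly the kind of exaggerated, noisy published estimate the abstract warns about.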
Related Topics
Life Sciences
Neuroscience
Cognitive Neuroscience
Authors
Shravan Vasishth, Daniela Mertzen, Lena A. Jäger, Andrew Gelman