Article ID: 931970
Journal: Journal of Memory and Language
Published Year: 2013
Pages: 24
File Type: PDF
Abstract

Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately-sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the ‘gold standard’ for confirmatory hypothesis testing in psycholinguistics and beyond.

► Demonstrates that common ways of specifying random effects in linear mixed-effects models are flawed.
► Uses Monte Carlo simulation to compare the performance of linear mixed-effects models to traditional approaches.
► Provides a set of suggested “best practices” for linear mixed-effects models in confirmatory analyses.
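To make the contrast between a random-intercepts-only model and a maximal model concrete, the sketch below simulates subjects whose sensitivity to the manipulation varies and fits both specifications. It is an illustrative assumption-laden example, not the authors' own code: the variable names (rt, condition, subject), the simulated data, and the use of Python's statsmodels are all hypothetical, and for brevity it includes only by-subject random effects, whereas the within-subjects and within-items designs discussed in the article call for crossed random effects (in lme4 notation, (1 + condition | subject) + (1 + condition | item)).

```python
# Hypothetical sketch: random-intercepts-only LMEM vs. a "maximal" by-subject
# model (random intercepts and random condition slopes). All names and the
# simulated data are illustrative, not from the article.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_items = 24, 12

# Simulate subjects that vary in their sensitivity to the manipulation
# (random slopes) -- the situation in which intercepts-only models are
# shown to be anti-conservative.
subj_intercept = rng.normal(0, 40, n_subj)
subj_slope = rng.normal(0, 30, n_subj)  # by-subject variability in the effect
rows = []
for s in range(n_subj):
    for i in range(n_items):
        for cond in (0, 1):
            rt = (500 + 20 * cond + subj_intercept[s]
                  + subj_slope[s] * cond + rng.normal(0, 50))
            rows.append({"subject": s, "item": i, "condition": cond, "rt": rt})
df = pd.DataFrame(rows)

# Random-intercepts-only model: (1 | subject) in lme4 notation.
m_intercepts = smf.mixedlm("rt ~ condition", df, groups=df["subject"]).fit()

# Maximal by-subject model: (1 + condition | subject) in lme4 notation.
m_maximal = smf.mixedlm("rt ~ condition", df, groups=df["subject"],
                        re_formula="~condition").fit()

print(m_intercepts.summary())
print(m_maximal.summary())
```

Comparing the two summaries on data like these typically shows a smaller standard error (and thus an inflated test statistic) for the condition effect under the intercepts-only model, which is the anti-conservativity the abstract warns against.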

Related Topics
Life Sciences Neuroscience Cognitive Neuroscience