Article ID: 461504
Journal: Journal of Systems and Software
Published Year: 2016
Pages: 13
File Type: PDF
Abstract

• Publication and researcher bias is common in software engineering experiments.
• Our model shows how these biases lead to a high proportion of incorrect results.
• Increased statistical power is a key factor in improving trustworthiness.

Context: The trustworthiness of research results is a growing concern in many empirical disciplines.

Aim: The goals of this paper are to assess how much the trustworthiness of results reported in software engineering experiments is affected by researcher and publication bias, given typical statistical power and significance levels, and to suggest improved research practices.

Method: First, we conducted a small-scale survey to document the presence of researcher and publication biases in software engineering experiments. Then, we built a model that estimates the proportion of correct results for different levels of researcher and publication bias. A review of 150 randomly selected software engineering experiments published in the period 2002–2013 provided input to the model.

Results: The survey indicates that researcher and publication bias is quite common. This finding is supported by the observation that the actual proportion of statistically significant results reported in the reviewed papers was about twice as high as would be expected in the absence of researcher and publication bias. Our model suggests a high proportion of incorrect results even under quite conservative assumptions.

Conclusion: Research practices must improve to increase the trustworthiness of software engineering experiments. A key to this improvement is to avoid conducting studies with unsatisfactorily low statistical power.
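The kind of model the abstract describes can be illustrated with a simplified sketch. The function below is a hypothetical Ioannidis-style calculation, not the paper's actual equations: it estimates the proportion of reported significant results that are correct, given statistical power, the significance level alpha, the prior probability that a tested effect is real, and a bias term (the fraction of would-be non-significant analyses that end up reported as significant due to researcher or publication bias). All parameter names and values here are illustrative assumptions.

```python
def proportion_true_positives(power, alpha, prior, bias=0.0):
    """Estimated proportion of reported significant results that are correct.

    Simplified Ioannidis-style model (illustrative, not the paper's exact model):
    - power: probability a real effect yields a significant result
    - alpha: significance level (false-positive rate for null effects)
    - prior: probability that a tested hypothesis is actually true
    - bias:  fraction of non-significant outcomes reported as significant anyway
    """
    # True positives: real effects detected by the test, plus real effects
    # "rescued" into significance by bias.
    true_pos = prior * (power + bias * (1 - power))
    # False positives: null effects crossing alpha by chance, plus null
    # effects pushed into significance by bias.
    false_pos = (1 - prior) * (alpha + bias * (1 - alpha))
    return true_pos / (true_pos + false_pos)

# With typical (low) power, moderate bias, and a modest prior, fewer than
# half of the significant results are correct; raising power helps.
low_power = proportion_true_positives(power=0.4, alpha=0.05, prior=0.25, bias=0.2)
high_power = proportion_true_positives(power=0.8, alpha=0.05, prior=0.25, bias=0.2)
print(f"power=0.4: {low_power:.2f}")   # a minority of significant results correct
print(f"power=0.8: {high_power:.2f}")  # noticeably better, though still imperfect
```

The design choice reflects the abstract's conclusion: with the bias and prior held fixed, the computed proportion of correct results rises monotonically with power, which is why underpowered studies are the key weakness the paper highlights.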
