Article Code | Journal Code | Publication Year | English Article | Full Text
---|---|---|---|---
888642 | 913558 | 2013 | 13-page PDF | Free download
When, and to what extent, should forecasts rely on a linear model or on human judgment? The judgmental forecasting literature suggests that aggregating model and judge with a simple 50:50 split tends to outperform either input alone. However, current research disregards the important roles that the structure of the task, the judge's level of expertise, and the number of individuals providing a forecasting judgment may play. Ninety-two music industry professionals and 88 postgraduate students were recruited in a field experiment to predict the chart entry positions of pop music singles in the UK and Germany. The results of a lens model analysis show how task structure and domain-specific expertise moderate the relative importance of model and judge. The study also delineates an upper bound on the predictive accuracy gained by aggregating multiple judgments in model-expert combinations. It is suggested that ignoring the characteristics of the task and/or the judge may lead to suboptimal forecasting performance.
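The 50:50 model-judge combination mentioned above is a simple weighted average. A minimal sketch of such a combination, with an adjustable weight to reflect the paper's point that the optimal split depends on task and judge (function name and weights are illustrative, not taken from the paper):

```python
def combine_forecasts(model_forecast, judge_forecast, model_weight=0.5):
    """Linearly combine a statistical model's forecast with a human judgment.

    model_weight=0.5 gives the simple 50:50 split discussed in the
    judgmental forecasting literature; other weights illustrate shifting
    reliance toward model or judge depending on task structure and expertise.
    """
    return model_weight * model_forecast + (1 - model_weight) * judge_forecast

# Hypothetical example: the model predicts a chart entry at position 12,
# while the expert judge predicts position 20.
print(combine_forecasts(12, 20))          # equal weights -> 16.0
print(combine_forecasts(12, 20, 0.75))    # leaning on the model -> 14.0
```

The combination itself is trivial; the paper's contribution lies in identifying when such equal weighting is, or is not, close to optimal.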
► Empirical study of combined model-judge forecasts in a unique field setting.
► Interactive effect of task structure and expertise on forecasting effectiveness.
► Reconciles contradictory findings on predictive accuracy of models versus judges.
► New insights into the optimal split assigned to model and manager inputs.
► Empirical evidence of the value of aggregating judgments.
Journal: Organizational Behavior and Human Decision Processes - Volume 120, Issue 1, January 2013, Pages 24–36