Article ID: 8110231
Journal: Renewable and Sustainable Energy Reviews
Published Year: 2018
Pages: 28
File Type: PDF
Abstract
The performance evaluation of forecasting algorithms is an essential requirement for quality assessment and model comparison. In recent years, algorithms that issue predictive distributions rather than point forecasts have evolved, as they better represent the stochastic nature of the underlying numerical weather prediction and power conversion processes. Standard error measures used for the evaluation of point forecasts are not sufficient for the evaluation of probabilistic forecasts. In comparison to deterministic error measures, many probabilistic scoring rules lack intuition, as they have to satisfy several requirements such as reliability and sharpness, whereas deterministic forecasts only need to be close to the actual observations. This article aims to enable practitioners and users of probabilistic forecasts to choose appropriate uncertainty representations and scoring rules depending on the desired application and the available data. A holistic view of the most popular forms of uncertainty representation from single forecasts and ensembles is given, followed by a presentation of the most popular scoring rules. We aim to broaden the understanding of the working principles and relationships of different scoring rules, and of their decompositions, for probabilistic forecasts of continuous variables by highlighting their differences. Therefore, we analyze the behavior of scoring rules in detail, a process frequently referred to as metaverification, on real-world multi-model ensemble forecasts in a number of case studies.
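To illustrate the distinction drawn above between deterministic error measures and probabilistic scoring rules, the following minimal Python sketch computes the empirical continuous ranked probability score (CRPS) of an ensemble forecast and compares it with the absolute error of the ensemble mean. The kernel form of the CRPS estimator is standard; the function name, the example ensembles, and the use of NumPy are illustrative assumptions and are not taken from the article.

import numpy as np

def crps_ensemble(members: np.ndarray, observation: float) -> float:
    """Empirical CRPS of an ensemble forecast against a single observation.

    Uses the standard kernel form
        CRPS = E|X - y| - 0.5 * E|X - X'|,
    where X and X' are independent draws from the ensemble.
    """
    members = np.asarray(members, dtype=float)
    term_obs = np.mean(np.abs(members - observation))
    term_spread = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term_obs - term_spread

# Hypothetical example: two ensembles whose means are both close to the
# observation, but with very different spreads.
obs = 3.2
sharp = np.array([3.0, 3.1, 3.3, 3.4])   # narrow, well-centered ensemble
wide = np.array([1.0, 2.5, 4.0, 5.5])    # wide ensemble, similar mean
print("abs. error of mean (sharp):", abs(sharp.mean() - obs))
print("abs. error of mean (wide): ", abs(wide.mean() - obs))
print("CRPS (sharp):", crps_ensemble(sharp, obs))
print("CRPS (wide): ", crps_ensemble(wide, obs))

In this toy example, the deterministic error of the ensemble mean is almost identical for both forecasts, while the CRPS clearly penalizes the wide ensemble for its lack of sharpness, which is the behavior the article's discussion of reliability and sharpness refers to.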
Related Topics
Physical Sciences and Engineering; Energy; Renewable Energy, Sustainability and the Environment