Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
4943034 | Expert Systems with Applications | 2018 | 42 | |
Abstract
Learning from data streams (incremental learning) is attracting increasing research attention due to the many real-world streaming problems and open challenges it poses, among which is the detection of concept drift: a phenomenon in which the data distribution changes, rendering the current prediction model inaccurate or obsolete. Current state-of-the-art detection methods can be roughly split into performance-monitoring algorithms and distribution-comparing algorithms. In this work we propose a novel concept drift detector that can be combined with an arbitrary classification algorithm. The proposed concept drift detector is based on computing multiple model explanations over time and observing the magnitudes of their changes. The model explanation is computed using a methodology that yields attribute-value contributions to prediction outcomes, thus providing insight into the model's decision-making process and making it transparent. The evaluation revealed that the proposed method surpasses the baseline methods in terms of concept drift detection, accuracy, robustness, and sensitivity. To further augment interpretability, we visualize the detection of concept drift, enabling both macro and micro views of the data.
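To make the core idea of explanation-based drift detection concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a permutation-style estimate of per-attribute contributions, the hypothetical helper names `explain_window` and `detect_drift`, a simple Euclidean distance between consecutive explanation vectors, and an arbitrary threshold value.

```python
# Hypothetical sketch: flag concept drift when the model's explanation
# (per-attribute contribution magnitudes) changes abruptly between windows.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier


def explain_window(model, X, rng):
    """Approximate each attribute's contribution as the mean absolute change
    in predicted class probabilities when that attribute is permuted."""
    base = model.predict_proba(X)
    contributions = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        contributions[j] = np.mean(np.abs(model.predict_proba(Xp) - base))
    return contributions


def detect_drift(stream_windows, threshold=0.15, seed=0):
    """Yield indices of windows where the explanation vector shifts sharply.
    `stream_windows` is an iterable of (X, y) batches from the stream."""
    rng = np.random.default_rng(seed)
    model, prev = None, None
    for i, (X, y) in enumerate(stream_windows):
        if model is None:
            model = DecisionTreeClassifier(random_state=seed).fit(X, y)
        expl = explain_window(model, X, rng)
        if prev is not None and np.linalg.norm(expl - prev) > threshold:
            yield i  # drift suspected: retrain on the new window
            model = clone(model).fit(X, y)
        prev = expl
```

Because the detector only consumes a model's predictions, the same scheme would work with any classifier exposing `predict_proba`, which mirrors the abstract's claim that the detector can be paired with an arbitrary classification algorithm.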
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Jaka Demšar, Zoran Bosnić