Article ID: 401972
Journal: International Journal of Human-Computer Studies
Published Year: 2012
Pages: 11
File Type: PDF
Abstract

We consider a scenario in which a previously built automatic classifier is available. It is used to classify new instances, but in some cases it may request the intervention of a human (the oracle), who supplies the correct class. In this scenario, two questions arise. First, how should the performance of the system be evaluated? It cannot be based solely on the classifier's predictive accuracy; it must also account for the cost of the human intervention. Second, under what concrete circumstances should the classifier decide to query the oracle? In this paper we study both questions and report an experimental evaluation of the different proposed alternatives.
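As a rough illustration (not the paper's actual method), one common decision strategy of this kind is to query the oracle whenever the classifier's confidence in its top prediction falls below a threshold, and to evaluate the overall system by charging a fixed cost per oracle query against the accuracy gained. The sketch below assumes a scikit-learn-style classifier exposing predict_proba; the names threshold and oracle_cost, and their default values, are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def classify_with_oracle(clf, X, y_true, threshold=0.8, oracle_cost=0.05):
        """Query the oracle when the top class probability is below `threshold`.

        Illustrative sketch only: `threshold` and `oracle_cost` are assumed
        parameters, not quantities defined in the paper.
        """
        y_true = np.asarray(y_true)
        proba = clf.predict_proba(X)                 # per-instance class probabilities
        confident = proba.max(axis=1) >= threshold   # True where no query is needed
        y_pred = clf.classes_[proba.argmax(axis=1)]
        y_pred[~confident] = y_true[~confident]      # oracle supplies the correct class
        n_queries = int((~confident).sum())
        accuracy = float((y_pred == y_true).mean())
        # Cost-adjusted score: accuracy minus a per-query charge for human effort.
        score = accuracy - oracle_cost * n_queries / len(y_true)
        return y_pred, n_queries, score

Under this scheme, threshold = 0 never queries the oracle (plain classifier accuracy) and threshold = 1 always queries it (perfect accuracy at maximum human cost), so the interesting trade-off lies in between.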

► Study of methods to evaluate the performance of interactive classifiers.
► Study of strategies to decide when the classifier should ask the oracle.
► Extensive experimental study with several base classifiers and many databases.
► Experimental results confirming suitability of interactive classifiers.
► Experimental results selecting the best decision strategies.

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors