Article ID: 430288
Journal: Journal of Computer and System Sciences
Published Year: 2012
Pages: 15
File Type: PDF
Abstract

It is well known that in many applications erroneous predictions of one type or another must be avoided. In some applications, like spam detection, false positives are a serious problem. In other applications, like medical diagnosis, abstaining from making a prediction may be more desirable than making an incorrect one. In this paper we consider different types of reliable classifiers suited to such situations. We formalize the notion of reliable classifiers and study their properties in the spirit of agnostic learning (Haussler, 1992; Kearns, Schapire, and Sellie, 1994), a PAC-like model in which no assumption is made on the function being learned. We then give two algorithms for reliable agnostic learning under natural distributions. The first reliably learns DNFs with no false positives using membership queries. The second reliably learns halfspaces from random examples with no false positives or false negatives, but the classifier sometimes abstains from making predictions.

► Formal models for reliable learning in the agnostic noise setting.
► Reduction from standard agnostic learning to reliable agnostic learning.
► DNF learning algorithm in positive-reliable (one-sided error) setting.
► Algorithm to learn halfspace sandwiches in fully-reliable setting.
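To illustrate the abstention-based notion of reliability described in the abstract and the "halfspace sandwich" highlight, the following is a minimal sketch, not the paper's algorithm: two hypothetical halfspaces sandwich the target concept, and the classifier commits to a label only when both agree, abstaining otherwise. All weights and points below are made up for illustration.

```python
# Minimal sketch of a fully-reliable-style predictor with abstention.
# Assumption: two hypothetical halfspaces (w_lo, b_lo) and (w_hi, b_hi)
# sandwich the target; these parameters are illustrative, not learned.

import numpy as np

def halfspace(w, b, x):
    """Sign (+1 or -1) of the affine function <w, x> + b."""
    return 1 if np.dot(w, x) + b >= 0 else -1

def sandwich_predict(w_lo, b_lo, w_hi, b_hi, x):
    """Return +1, -1, or '?' (abstain).

    The classifier labels x only when both halfspaces agree; in the
    disagreement region it abstains rather than risk a wrong label,
    which is the intuition behind reliability with abstention."""
    lo = halfspace(w_lo, b_lo, x)
    hi = halfspace(w_hi, b_hi, x)
    return lo if lo == hi else "?"

if __name__ == "__main__":
    # Hypothetical parameters: two parallel halfspaces forming a "sandwich".
    w_lo, b_lo = np.array([1.0, 1.0]), -0.5
    w_hi, b_hi = np.array([1.0, 1.0]), +0.5
    for x in [np.array([1.0, 1.0]),     # clearly positive: both agree -> +1
              np.array([-1.0, -1.0]),   # clearly negative: both agree -> -1
              np.array([0.1, 0.2])]:    # inside the margin: abstain -> '?'
        print(x, "->", sandwich_predict(w_lo, b_lo, w_hi, b_hi, x))
```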

Related Topics
Physical Sciences and Engineering > Computer Science > Computational Theory and Mathematics
Authors
, , ,