Article ID: 436879
Journal: Theoretical Computer Science
Published Year: 2013
Pages: 13
File Type: PDF
Abstract

We define and study a learning paradigm that sits between identification in the limit and classification. More precisely, we expect a learner to determine in the limit which members of an arbitrary finite set D of possible data belong to a target language L. As D grows, the task comes closer and closer to identifying L itself; but since D is always finite while L can be infinite, it can still be expected that Ex- and BC-learning are often more difficult than performing this classification task. The paper supports this intuition and makes it precise, taking into account desirable constraints on the learner's behaviour, such as bounding the number of mind changes and being conservative. Special attention is given to various forms of consistency. In particular, we may require consistency not only between the members of D to classify, the current data σ, and a language L, but also between larger sets of possible data to classify (supersets of D) and the same σ and L: whereas in the classical paradigms of inductive inference or classification only the available data can grow, here both the available data and the set of possible data to classify can grow. We provide a fairly comprehensive set of results, many of them optimal, that demonstrate the fruitfulness of the approach and the richness of the paradigm.
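To make the success criterion of the paradigm concrete, here is a hedged formalization; the learner symbol M, the text notation T[n], and the exact convergence condition are our illustrative assumptions, not necessarily the paper's own definitions.

```latex
% A sketch of limiting classification, under assumed notation:
% the learner M sees a fixed finite set D together with ever-longer
% initial segments T[n] of a text T for the target language L, and
% must converge to the set of members of D that lie in L:
\[
  \exists n_0\ \forall n \ge n_0:\quad
  M(D, T[n]) \;=\; \{\, d \in D : d \in L \,\}
\]
% As D is enlarged toward all possible data, success approaches
% identification of L; for any fixed finite D it remains a
% classification task, which is why it can be strictly easier
% than Ex- or BC-learning of L.
```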
