| Article ID | Journal | Published Year | Pages | File Type |
| --- | --- | --- | --- | --- |
| 427119 | Information and Computation | 2011 | 19 | |
Gold's original paper on inductive inference introduced a notion of an optimal learner. Intuitively, a learner identifies a class of objects optimally iff there is no other learner that requires as little of each presentation of each object in the class in order to identify that object, and, for some presentation of some object in the class, requires strictly less of that presentation in order to identify that object. Beick considered this notion in the context of function learning and gave an intuitive characterization of an optimal function learner. Jantke and Beick subsequently characterized the classes of functions that are algorithmically, optimally identifiable.

Herein, Gold's notion is considered in the context of language learning. It is shown that a characterization of optimal language learners analogous to Beick's does not hold. It is also shown that the classes of languages that are algorithmically, optimally identifiable cannot be characterized in a manner analogous to that of Jantke and Beick.

Other interesting results concerning optimal language learning include the following. It is shown that strong non-U-shapedness, a property involved in Beick's characterization of optimal function learners, does not restrict algorithmic language learning power. It is also shown that, for an arbitrary optimal learner F of a class of languages L, F optimally identifies a subclass K of L iff F is class-preserving with respect to K.
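For orientation, the following is a minimal formal sketch of the optimality notion described above, stated for language learning from texts (positive data) in the standard Gold framework; the symbols F, G, T, and the convergence-point function conv are illustrative notation introduced here and need not match the paper's own.

```latex
% Sketch, assuming the standard Gold framework: a text T for a language L is an
% infinite sequence whose content is exactly L, and conv(F, T) denotes the least
% n such that F outputs the same correct hypothesis for L on every initial
% segment T[m] with m >= n -- i.e., how much of the presentation T the learner F
% requires in order to identify L.
\[
  \mathbf{F} \text{ identifies } \mathcal{L} \text{ optimally}
  \iff
  \neg\exists\,\mathbf{G}\;\Bigl[
    \mathbf{G} \text{ identifies } \mathcal{L}
    \;\wedge\;
    (\forall L \in \mathcal{L})\,(\forall \text{ texts } T \text{ for } L)\;
      \mathrm{conv}(\mathbf{G}, T) \le \mathrm{conv}(\mathbf{F}, T)
    \;\wedge\;
    (\exists L \in \mathcal{L})\,(\exists \text{ text } T \text{ for } L)\;
      \mathrm{conv}(\mathbf{G}, T) < \mathrm{conv}(\mathbf{F}, T)
  \Bigr].
\]
```

That is, F is optimal on the class iff no competing learner matches F's data requirement on every presentation while beating it on at least one.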