Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
435651 | 689922 | 2010 | 16-page PDF | Free download |

In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner's memory not only in how much data may be stored, but also in how long those data may be stored without being refreshed. More specifically, the model requires that, if the learner commits an example x to memory, and x is not presented to the learner again thereafter, then the learner eventually forgets x, i.e., x eventually no longer appears in the learner's memory. This model is called temporary example memory (Tem) learning.

Many interesting results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k+1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Tem sense. On the other hand, there exists a class of languages that can be identified by memorizing just one example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense.

Results are also presented concerning the special case of learning classes of infinite languages.
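The memory constraint described above can be made concrete with a toy sketch (not from the paper): a store of at most k examples in which every stored example carries a time-to-live that is refreshed only when that example is presented again. The fixed TTL discipline is an illustrative assumption; the temporary-example-memory definition only requires that an unrefreshed example is *eventually* forgotten, not after any fixed number of steps.

```python
class TemMemory:
    """Toy bounded memory: at most `capacity` stored examples, each with a
    time-to-live that is refreshed only when the example is seen again.
    An example that is never presented again is eventually forgotten."""

    def __init__(self, capacity, ttl):
        self.capacity = capacity   # k: maximum number of stored examples
        self.ttl = ttl             # steps an example survives without a refresh
        self.store = {}            # example -> remaining lifetime

    def observe(self, example):
        # Age every stored example; drop any whose lifetime has run out
        # (unless it is the example being presented right now).
        self.store = {x: t - 1 for x, t in self.store.items()
                      if t > 1 or x == example}
        # Seeing `example` refreshes it, or inserts it if there is room.
        if example in self.store or len(self.store) < self.capacity:
            self.store[example] = self.ttl

    def contents(self):
        return set(self.store)


mem = TemMemory(capacity=2, ttl=3)
for x in [1, 2, 3, 4, 5]:
    mem.observe(x)
# Example 1 was stored first but never presented again, so it has been
# forgotten; only the most recently refreshed examples remain.
print(mem.contents())  # → {4, 5}
```

Under a Bem-style model, by contrast, the learner could hold example 1 indefinitely without ever seeing it again; the extra forgetting requirement is exactly what the two classes of separation results in the abstract exploit.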
Journal: Theoretical Computer Science - Volume 411, Issues 29–30, 17 June 2010, Pages 2757-2772