Document Type

Article

Publication Date

2011

Abstract

A variant of iterative learning in the limit (cf. Lange and Zeugmann 1996) is studied in which a learner gets negative examples refuting conjectures that contain data in excess of the target language and uses additional information of the following four types: (a) memorizing up to n input elements seen so far; (b) up to n feedback membership queries (testing whether an item is a member of the input seen so far); (c) the number of input elements seen so far; (d) the maximal element of the input seen so far. We explore how additional information available to such learners (defined and studied in Jain and Kinber 2007) may help. In particular, we show that adding the maximal element or the number of elements seen so far helps such learners to infer any indexed class of languages class-preservingly (using a descriptive numbering defining the class); as proved in Jain and Kinber (2007), this is not possible without additional information. We also study how, in the given context, the different types of additional information fare against each other, and we establish hierarchies of learners memorizing n + 1 versus n input elements seen and asking n + 1 versus n feedback membership queries.
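To make the learning model concrete, the following minimal sketch (in Python) simulates iterative learning from text with negative counterexamples and additional information of type (d), the maximal element of the input seen so far. The indexed family used here (the languages L_k = {0, ..., k} together with N itself) and all function names are illustrative assumptions and do not come from the paper; the learner stores only its current conjecture, and the target index is passed in solely to simulate the teacher's counterexample oracle.

# Toy sketch, not the paper's construction. Indexed family assumed for
# illustration: index k >= 0 denotes L_k = {0, ..., k}; index None denotes N.

def counterexample(conj, target):
    """Teacher oracle: return an element of the conjectured language that
    lies outside the target language (a negative counterexample), or None."""
    if target is None:
        return None            # the target is N; no conjecture is in excess
    if conj is None or conj > target:
        return target + 1      # witness that the conjecture overgeneralizes
    return None

def learn(text, target):
    """Iterative learner: it keeps only its current conjecture. At each step
    it receives one positive example, the maximal element seen so far (the
    additional information, supplied here by the simulated environment), and
    a counterexample to its previous conjecture if that conjecture was too
    large. The target is used only to simulate the teacher oracle."""
    conj = None                # initial guess: the whole of N
    max_seen = 0
    for x in text:
        max_seen = max(max_seen, x)
        if counterexample(conj, target) is not None:
            # The previous conjecture was refuted, so the target is finite;
            # the maximal element seen so far pins it down in the limit.
            conj = max_seen
        elif conj is not None:
            conj = max(conj, max_seen)
    return conj

print(learn([3, 0, 5, 2, 1, 5], target=5))   # converges to index 5 (i.e. L_5)
print(learn([7, 1, 9, 4], target=None))      # stays on None, the index of N

On a text for a finite target the first, overgeneral conjecture (N) is refuted, after which the maximal element seen so far drives the learner to the correct index in the limit; on a text for N no counterexample ever arrives and the initial conjecture is never abandoned.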

Comments

Jain, Sanjay and Efim Kinber. "Iterative Learning from Texts and Counterexamples using Additional Information." Machine Learning 84 (2011): 291–333.

DOI: 10.1007/s10994-011-5238-7

