One-Shot Learners Using Negative Counterexamples and Nearest Positive Examples

Document Type

Conference Proceeding

Publication Date

2007

Abstract

As some cognitive research suggests, in the process of learning languages, in addition to overt explicit negative evidence, a child often receives covert explicit evidence in the form of corrected or rephrased sentences. In this paper, we suggest one approach to formalizing overt and covert evidence within the framework of one-shot learners via subset and membership queries to a teacher (oracle). We compare and explore the general capabilities of our models, as well as the complexity advantages of learnability models of one type over models of other types, where complexity is measured in terms of the number of queries. In particular, we establish that "correcting" positive examples sometimes give a learner more power than negative (counter)examples alone combined with access to full positive data.

Comments

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4754).

International Conference on Algorithmic Learning Theory.

ISBN: 9783540752240

DOI

10.1007/978-3-540-75225-7_22
