Learning Families of Formal Languages from Positive and Negative Information

01/31/2018
by   Martin Aschenbach, et al.

For 50 years, research in the area of inductive inference has investigated the learning of formal languages, drawing on computability theory, complexity theory, cognitive science, machine learning, and, more generally, artificial intelligence. As one of the pioneers, Gold investigated the most common formalization, learning in the limit, both from solely positive examples and from positive and negative information. The first mode of presentation has been studied extensively, including insights into how additional requirements on the learner's hypothesis sequence, or properties required of the learner itself, restrict which collections of languages are learnable. We focus on the second paradigm, learning from informants, and study how imposing different restrictions on the learning process affects learnability. For example, we show that learners can be assumed to change their hypothesis only when it is inconsistent with the data (such learners are called conservative). Further, we give a picture of how the most important learning restrictions relate to one another. Our investigations support the claim that delayability is the right structural property for gaining a deeper understanding of the nature of learning restrictions.
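The notion of a conservative learner from an informant can be made concrete with a small sketch. This is not taken from the paper: the toy indexed family L_k = {0, ..., k} over the natural numbers and the names `informant` and `ConservativeLearner` are assumptions chosen for illustration. An informant presents every element together with its membership label; the learner revises its hypothesis only when the current one is contradicted by the labeled data seen so far.

```python
from typing import Iterator, Optional, Tuple

def informant(k: int) -> Iterator[Tuple[int, bool]]:
    """Enumerate every natural number with its membership label for L_k = {0, ..., k}."""
    n = 0
    while True:
        yield n, n <= k
        n += 1

class ConservativeLearner:
    """Outputs an index k, i.e., the hypothesis L_k = {0, ..., k}.

    Conservative: the current hypothesis is revised only when it becomes
    inconsistent with the labeled data seen so far.
    """

    def __init__(self) -> None:
        self.pos = set()   # elements labeled as belonging to the target language
        self.neg = set()   # elements labeled as not belonging
        self.hypothesis: Optional[int] = None

    def _consistent(self, k: int) -> bool:
        # L_k explains the data iff all positives are <= k and all negatives are > k.
        return all(p <= k for p in self.pos) and all(n > k for n in self.neg)

    def update(self, x: int, label: bool) -> Optional[int]:
        (self.pos if label else self.neg).add(x)
        # Conservative step: keep the hypothesis while it is consistent.
        if self.hypothesis is not None and self._consistent(self.hypothesis):
            return self.hypothesis
        # Otherwise switch to the minimal index consistent with the data.
        if self.pos:
            self.hypothesis = max(self.pos)
        return self.hypothesis

# Usage: on the informant for L_5, the learner converges to hypothesis 5
# and never changes it again -- identification in the limit.
learner = ConservativeLearner()
stream = informant(5)
for _ in range(20):
    x, label = next(stream)
    h = learner.update(x, label)
print(h)  # 5
```

On this toy class the strategy also happens to be correct: each mind change is forced by an inconsistency, and once the hypothesis equals the true index, no future labeled example can contradict it. The paper's result is much more general, showing that conservativeness can be assumed without loss of learning power when learning from informants.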
