Less is more: why all paradigms are defective, and why that is a good thing

Only a fraction of lexemes are encountered in all of their paradigm forms in any corpus, or even in the lifetime of any speaker. This raises the question of how native speakers confidently produce and comprehend word forms they have never witnessed. We present the results of an experiment using a recurrent neural network computational learning model. In particular, we compare the model's production of unencountered forms under two types of training data: full paradigms vs. single word forms for Russian nouns, verbs, and adjectives. In the long run, the model performs better with the more naturalistic training on single word forms, even though the full-paradigm training data is much larger, since it includes every form of every word. We discuss why "defective" paradigms may be better for human learners as well.
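The contrast between the two training conditions can be sketched in a few lines of Python. This is an illustration of the data-construction step only, not the authors' model or code: the toy lexicon, cell labels, and sampling scheme below are hypothetical, chosen simply to show how naturalistic single-form sampling leaves many paradigm cells unencountered while full-paradigm training covers every cell.

```python
# Illustrative sketch (not the paper's code): the two training-data
# conditions compared in the experiment. Lexicon and cells are hypothetical.
import random

# Hypothetical mini-lexicon: lexeme -> {paradigm cell: inflected form}
PARADIGMS = {
    "kniga": {"nom.sg": "kniga", "gen.sg": "knigi", "nom.pl": "knigi"},
    "stol":  {"nom.sg": "stol",  "gen.sg": "stola", "nom.pl": "stoly"},
}

def full_paradigm_data(paradigms):
    """Condition 1: every cell of every lexeme is a training item."""
    return [(lex, cell, form)
            for lex, cells in paradigms.items()
            for cell, form in cells.items()]

def single_form_data(paradigms, n_tokens, seed=0):
    """Condition 2: naturalistic exposure -- sample one attested form per
    token, so some paradigm cells are never seen ('defective' exposure)."""
    rng = random.Random(seed)
    items = full_paradigm_data(paradigms)
    return [rng.choice(items) for _ in range(n_tokens)]

full = full_paradigm_data(PARADIGMS)
single = single_form_data(PARADIGMS, n_tokens=4)
seen_cells = {(lex, cell) for lex, cell, _ in single}
print(len(full))                    # all 6 cells of both lexemes
print(len(seen_cells) < len(full))  # naturalistic sample leaves gaps
```

The paper's finding is that a learner trained on the second, gappier kind of data ends up generalizing better to unencountered forms than one trained on the exhaustive first kind.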


Publication Date:
Jun 21 2019
Date Submitted:
Jul 12 2019
ISSN:
1613-7027
Citation:
Corpus Linguistics and Linguistic Theory

Note: The file is under embargo until: 2020-06-21



 Record created 2019-07-12, last modified 2019-07-12
