Un-learning Un-Prefixation Errors

Abstract

A simple three-layer feed-forward network was trained to classify verbs as reversible with un- (e.g., unpack), reversible with dis- (e.g., disassemble), or non-reversible (e.g., squeeze), on the basis of their semantic features. The aim was to model a well-known phenomenon whereby children produce, and subsequently retreat from, overgeneralization errors (e.g., *unsqueeze). The model learned to correctly classify both the verbs in the training set and verbs held out during training (demonstrating generalization). The model exhibited overgeneralization (e.g., predicting unsqueeze for squeeze) and subsequent retreat, and was able to predict adult acceptability judgments of the different un- forms.
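The architecture described above can be sketched as a small three-layer network (input, hidden, softmax output) trained to map semantic-feature vectors to one of the three prefix classes. The feature dimensionality, hidden-layer size, learning rate, and the toy rule generating the labels below are all assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each verb is a binary semantic-feature vector.
# The feature count and class-assignment rule are illustrative assumptions.
N_FEATURES = 8
CLASSES = ["un-", "dis-", "non-reversible"]

# Toy training data: class is determined by two arbitrary features
# standing in for "reversibility" semantics.
X = rng.integers(0, 2, size=(60, N_FEATURES)).astype(float)
y = np.where(X[:, 0] == 1, 0, np.where(X[:, 1] == 1, 1, 2))

# Three-layer network: input -> hidden (sigmoid) -> output (softmax).
H = 16
W1 = rng.normal(0, 0.5, (N_FEATURES, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 3)); b2 = np.zeros(3)

def forward(X):
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))      # hidden activations
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable softmax
    return h, e / e.sum(axis=1, keepdims=True)

onehot = np.eye(3)[y]
for _ in range(3000):                             # full-batch gradient descent
    h, p = forward(X)
    dz = (p - onehot) / len(X)                    # softmax cross-entropy gradient
    dW2 = h.T @ dz; db2 = dz.sum(0)
    dh = dz @ W2.T * h * (1 - h)                  # backprop through sigmoid
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= 1.0 * dW2; b2 -= 1.0 * db2
    W1 -= 1.0 * dW1; b1 -= 1.0 * db1

_, p = forward(X)
print(f"training accuracy: {(p.argmax(1) == y).mean():.2f}")
```

In the paper's simulation, early in training the network's output for a non-reversible verb like squeeze can favor the un- class (overgeneralization), with the error receding as training continues; this sketch only shows the classification machinery, not that developmental trajectory.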
