The nearly final version of our Phonological Concept Learning paper, to appear in Cognitive Science, is now available here. The abstract is below, and we very much welcome further discussion, either by e-mail to the authors (addresses on the first page of the paper), or as comments to this post.
Abstract.
Linguistic and non-linguistic pattern learning have been studied separately, but we argue for a comparative approach. Analogous inductive problems arise in phonological and visual pattern learning. Evidence from three experiments shows that human learners can solve them in analogous ways, and that human performance in both cases can be captured by the same models.
We test GMECCS, an implementation of the Configural Cue Model (Gluck & Bower, 1988a) in a Maximum Entropy phonotactic-learning framework (Goldwater & Johnson, 2003; Hayes & Wilson, 2008) with a single free parameter, against the alternative hypothesis that learners seek featurally-simple algebraic rules (“rule-seeking”). We study the full typology of patterns introduced by Shepard, Hovland, and Jenkins (1961) (“SHJ”), instantiated as both phonotactic patterns and visual analogues, using unsupervised training.
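As a rough illustration of the kind of model the abstract describes, here is a minimal sketch of a Maximum Entropy learner over configural cues. This is an illustrative assumption on our part, not the authors' implementation: it assumes binary feature vectors as stimuli, takes every conjunction of feature values as a cue, assigns p(x) ∝ exp(Σᵢ wᵢ fᵢ(x)), and trains the weights by gradient ascent on the log-likelihood of the in-pattern stimuli, with the learning rate as the single free parameter. All function names and the exact parameterization are hypothetical.

```python
# Hypothetical sketch of a MaxEnt learner over "configural cues", in the
# spirit of GMECCS -- NOT the authors' implementation. Stimuli are binary
# feature vectors; every conjunction of feature values is a cue; and the
# single free parameter is the gradient-ascent learning rate.
import math
from itertools import combinations, product

def cues(stimulus):
    """All conjunctions of feature values present in a stimulus,
    represented as frozensets of (feature_index, value) pairs."""
    pairs = list(enumerate(stimulus))
    return [frozenset(c) for r in range(1, len(pairs) + 1)
            for c in combinations(pairs, r)]

def maxent_probs(weights, space):
    """MaxEnt distribution over the stimulus space: p(x) ~ exp(sum of cue weights)."""
    scores = [math.exp(sum(weights.get(c, 0.0) for c in cues(x)))
              for x in space]
    z = sum(scores)
    return [s / z for s in scores]

def train(positives, space, rate=0.1, epochs=200):
    """Gradient ascent on the log-likelihood of the in-pattern stimuli:
    each cue weight moves by rate * (observed - expected frequency)."""
    weights = {}
    obs = {}                      # observed cue frequencies in the data
    for x in positives:
        for c in cues(x):
            obs[c] = obs.get(c, 0.0) + 1.0 / len(positives)
    for _ in range(epochs):
        probs = maxent_probs(weights, space)
        exp_ = {}                 # expected cue frequencies under the model
        for x, p in zip(space, probs):
            for c in cues(x):
                exp_[c] = exp_.get(c, 0.0) + p
        for c in set(obs) | set(exp_):
            weights[c] = weights.get(c, 0.0) + rate * (obs.get(c, 0.0)
                                                       - exp_.get(c, 0.0))
    return weights

# Example: an SHJ-style space of three binary features (8 stimuli),
# with a Type I pattern defined by a single relevant feature.
space = list(product((0, 1), repeat=3))
type_i = [s for s in space if s[0] == 1]
weights = train(type_i, space)
print(maxent_probs(weights, space))   # mass concentrates on the 4 positives
```

Note the design choice this sketch is meant to bring out: because credit is spread over the full set of conjunctive cues rather than over individual features, a pattern that depends on fewer features is not automatically easier to learn, which is the intuition behind the GMECCS predictions discussed next.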
Unlike SHJ's experiments, Experiments 1 and 2 found that both phonotactic and visual patterns that depended on fewer features could be more difficult than those that depended on more features, as predicted by GMECCS but not by rule-seeking. GMECCS also correctly predicted performance differences between stimulus subclasses within each pattern. A third experiment used supervised training (which can facilitate rule-seeking in visual learning) in an attempt to elicit simple-rule-seeking phonotactic learning, but cue-based behavior persisted.
We conclude that similar cue-based cognitive processes are available for phonological and visual concept learning, and hence that studying either kind of learning can lead to significant insights about the other.