Here’s a paper by Jason Mattausch on binding, bidirectional OT, and iterated learning that might be worth taking a look at. I should also mention that there is some local interest in bidirectional OT: Jill de Villiers has used it to explain some challenging comprehension/production asymmetries in child syntax.
Cable on iterated learning
Please post comments on Seth’s paper here.
Learning similarity structure in an analogical model
Here’s the paper I mentioned in class by Daelemans and colleagues on modeling the learning of Dutch stress. See the section on information gain for how they learn which features are generally predictive.
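To make the measure concrete, here is a minimal sketch of information gain as a feature-weighting statistic (my own illustration, not code from the paper): for each feature, it computes how much knowing that feature’s value reduces entropy over the class labels. The toy stress data and feature names below are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(data, feature_index):
    """How much knowing one feature's value reduces class entropy.

    data: list of (feature_vector, class_label) pairs.
    """
    labels = [label for _, label in data]
    base = entropy(labels)
    # Partition the data by the value of the chosen feature.
    partitions = {}
    for features, label in data:
        partitions.setdefault(features[feature_index], []).append(label)
    # Entropy remaining after the split, weighted by partition size.
    remainder = sum(len(part) / len(data) * entropy(part)
                    for part in partitions.values())
    return base - remainder

# Invented example: syllable-weight features predicting stress position.
# Each item is ((penult weight, final weight), stressed syllable).
toy = [
    (("heavy", "light"), "penult"),
    (("heavy", "heavy"), "final"),
    (("light", "light"), "penult"),
    (("light", "heavy"), "final"),
]
for i, name in enumerate(["penult weight", "final weight"]):
    print(name, round(information_gain(toy, i), 3))
# In this toy set, final weight is fully predictive (gain = 1 bit) and
# penult weight is uninformative (gain = 0), so an analogical model that
# weights features by information gain would rely on the former.
```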
Comments on Week 8: Albright
Comments on Week 8: Daland et al.
Please post comments here.
More on vowel category learning
I came across two more papers that are very much worth taking a look at if you are interested in modeling vowel category learning. This paper by Schwartz and colleagues proposes a model of speech perception with what looks like an interesting connection between production and perception. Relevant to our concerns, it has a brief discussion of feature economy in vowels, with an early Ohala reference, and also some fascinating data on speaker-to-speaker variation in vowel height boundaries (which is correlated with production differences!). This paper by Sonderegger and Yu proposes a Bayesian analysis of compensation for coarticulation, and at the end discusses some possible extensions to the modeling of change. It builds on the work by Feldman and colleagues that I mentioned in class.
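For those curious about the machinery: in the Feldman-style setup that Sonderegger and Yu build on, listeners assume Gaussian phonetic categories plus Gaussian perceptual noise, and the optimal percept is the posterior mean of the talker’s intended target, which gets pulled toward the mean of whichever category the signal probably came from. Here is a minimal sketch with invented numbers (the category means, variances, and noise variance are mine, not from either paper); compensation for coarticulation would then amount to letting the category means shift with context.

```python
import math

def normal_pdf(x, mu, var):
    """Density of a Gaussian with mean mu and variance var at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def expected_target(s, categories, noise_var):
    """Posterior mean of the intended target given acoustic signal s.

    categories: list of (mean, variance, prior) for Gaussian categories;
    noise_var: variance of the perceptual noise on the signal.
    """
    # Posterior over categories: prior * N(s; mu_c, var_c + noise_var).
    weights = [p * normal_pdf(s, mu, var + noise_var)
               for mu, var, p in categories]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Within a category, the percept shrinks s toward the category mean.
    means = [(var * s + noise_var * mu) / (var + noise_var)
             for mu, var, _ in categories]
    return sum(w * m for w, m in zip(weights, means))

# Two invented vowel categories on an F1 axis (Hz).
cats = [(300.0, 1600.0, 0.5), (600.0, 1600.0, 0.5)]
for s in (350.0, 450.0, 550.0):
    print(s, "->", round(expected_target(s, cats, noise_var=900.0), 1))
# Signals near a category mean are pulled further toward it (350 -> ~332),
# while a signal midway between the two categories stays put (450 -> 450).
```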
Feature economy in vowels?
Seth Cable and Brian Dillon had some interesting comments and questions about feature economy in vowels. I’ve pasted in the discussion below, and just want to add some references here. In a paper in a recent volume, Mackie and Mielke find that feature economy holds of vowel systems that emerge from simulations, even when explicit features aren’t used (!). That paper cites de Boer (2000) on modeling the emergence of vowel inventories, and other work in this vein is cited in the Boersma and Hamann paper we are reading this week (see also B&H for plenty of relevant discussion of dispersion). Brian brings up mixture-of-Gaussians models of vowel category learning (see this week’s Kirby paper for further references). I suspect that the intersection between that work and iterated learning that he suggests below hasn’t really been explored yet.

Here’s Brian and colleagues’ Inuktitut paper, and here is a paper that discusses iterated language learning in a Bayesian framework. Also, here is a paper that derives a non-linguistic simplicity bias from a maximally uncommitted prior, and here is a paper that models iterated learning using a stipulated simplicity prior (the paper that some of us read with Michael Lavine of Statistics last year). I’ve also appended a toy sketch of the economy/dispersion trade-off at the end of this post.
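As a rough illustration of what that unexplored intersection might look like (my own sketch, assuming scikit-learn is available, with entirely invented formant values): an iterated learning chain whose learners are two-component Gaussian mixtures, each generation fitting a mixture to the previous generation’s productions and then producing noisy data for the next.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Generation 0: F1 productions (Hz) from two invented vowel categories.
data = np.concatenate([rng.normal(300, 60, 200),
                       rng.normal(600, 60, 200)]).reshape(-1, 1)

for gen in range(10):
    # Each learner infers categories from the previous generation's
    # output by fitting a two-component Gaussian mixture with EM.
    learner = GaussianMixture(n_components=2, random_state=0).fit(data)
    # The learner then produces training data for the next generation,
    # with added production noise -- the transmission bottleneck.
    samples, _ = learner.sample(400)
    data = samples + rng.normal(0, 20, samples.shape)
    lo, hi = sorted(learner.means_.ravel())
    print(f"gen {gen}: category means = {lo:.0f} Hz, {hi:.0f} Hz")
```

Depending on the amount of production noise, the two categories can stay separate, drift, or collapse over generations; which biases keep such systems dispersed and economical is the kind of question the intersection raises.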
*****
From Seth: It occurred to me randomly today that vowel systems aren’t generally model pictures of feature economy. Rather, folks tend to think that there’s strong pressure to keep vowels maximally distinct, leading ideally to a three-vowel system of [high front unrounded], [low unrounded], and [high back rounded].
In fact, my hunch is that it’s comparatively rare for vowel systems to be perfectly economical with respect to features, so that the language exhibits every possible combination of [+/- front], [+/- high], and [+/- round].
If that’s right, what’s going on here? Is there some kind of countervailing pressure to keep vowels distinct? How can we model the interaction between these two pressures?
*****
From Brian:
If that’s right, what’s going on here? Is there some kind of countervailing pressure to keep vowels distinct?
How can we model the interaction between these two pressures?
*****
From Seth: One quick thought about using dispersion theory, though, is that there might be a challenge in encoding the inputs, just because dispersion theory constraints evaluate entire inventories, whereas feature economy was emerging as a result of learning individual segments…
*****
From Brian:
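To make Seth’s two pressures concrete, here is a toy way of scoring candidate inventories on both at once (all formant values and feature assignments below are rough inventions of mine, and neither score comes from the papers under discussion): a Clements-style economy index, segments per contrastive feature, against a Liljencrants & Lindblom-style dispersion cost, the sum of inverse squared distances in formant space.

```python
from itertools import combinations

# Invented vowels as (F1, F2) points (Hz) with binary feature values.
VOWELS = {
    "i": ((300, 2300), {"high": 1, "front": 1, "round": 0}),
    "u": ((300, 800),  {"high": 1, "front": 0, "round": 1}),
    "a": ((700, 1300), {"high": 0, "front": 0, "round": 0}),
    "y": ((300, 2100), {"high": 1, "front": 1, "round": 1}),
    "e": ((450, 2100), {"high": 0, "front": 1, "round": 0}),
    "o": ((450, 900),  {"high": 0, "front": 0, "round": 1}),
}

def economy(inventory):
    """Clements-style economy: segments per contrastive feature."""
    contrastive = {f for f in ("high", "front", "round")
                   if len({VOWELS[v][1][f] for v in inventory}) > 1}
    return len(inventory) / max(len(contrastive), 1)

def dispersion_cost(inventory):
    """Liljencrants & Lindblom-style cost: sum of 1/d^2 over vowel pairs."""
    cost = 0.0
    for a, b in combinations(inventory, 2):
        (f1a, f2a), (f1b, f2b) = VOWELS[a][0], VOWELS[b][0]
        cost += 1.0 / ((f1a - f1b) ** 2 + (f2a - f2b) ** 2)
    return cost

for inv in (["i", "u", "a"], ["i", "y", "e", "a"], ["i", "u", "e", "o", "a"]):
    print(inv, "economy:", round(economy(inv), 2),
          "dispersion cost (x1e6):", round(dispersion_cost(inv) * 1e6, 1))
# The front-crowded [i y e a] inventory is more economical than [i u a]
# but pays a much higher dispersion cost -- the trade-off in miniature.
```

Note that this doesn’t answer Seth’s encoding worry above: both scores evaluate whole inventories, so a model in which economy emerges from learning individual segments would need the pressures to apply in some other way.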
Week 2 comments
Please post comments or questions for the papers in Week 2 here by the end of the day Sunday (Jan. 29th).
Welcome!
This is the blog for Ling 754: Topics in Diachronic Linguistics: Computational modeling of language change, led by Michael Becker, Alice Harris and Joe Pater. We will meet in South College 301 (the Partee room) on Tuesdays and Thursdays from 1-2:15 (first meeting Tuesday, January 24th). A short overview can be found to the left. All are welcome to attend. We already have three guest presenters lined up (Rajesh Bhatt, Seth Cable and Katya Pertsova), so it should be an exciting semester!