Please leave comments on the papers for weeks 12-13 or related topics here, by end of the day Sunday the 25th at the latest.
Hayes et al. (2009) gave me a slightly better understanding of how phonological experiments can complement traditional “pen and paper” methods in linguistics. However, two methodologies in the experiment struck me as prone to error: (1) using Google search results to form typological predictions and (2) conducting the experiment entirely online as opposed to in person. I also had trouble understanding some of the statistics in the paper; I looked many things up, but I still don’t fully understand how and why everything works.
Hayes et al. (2009) present a study on Hungarian vowel harmony in order to refute the strong UG stance adopted by Becker and colleagues in their study on Turkish final devoicing and its asymmetric patterns of alternation and non-alternation. Becker and colleagues hold that the law of frequency matching, which states that “speakers of languages with variable lexical patterns respond stochastically when tested on such patterns and consequently, their responses aggregately match the lexical frequencies”, is obeyed only for phonological patterns present in UG, whereas other patterns cannot be learned. Hayes and colleagues, however, argue that unnatural patterns can be learned, as some diachronic developments show, and their experiment tries to prove it. In a preliminary analysis, they found that consonants affect vowel harmony in different ways and that some, if not all, of these contexts are unnatural. Once these conclusions were verified, the next step was to examine the role of these environments in native speakers’ performance: whether the unnatural patterns are learned as easily as the natural ones. A wug test showed that unnatural patterns were as reliable in determining vowel harmony as natural ones. With this experiment, Hayes and colleagues obtain results that differ from previous analyses, and their methodology is also quite unusual: they relied on various Google tools and recruited a considerable number of participants online. After considering their results, they conclude that both natural and unnatural patterns are underlearned; still, the study shows “a learning bias against unnatural constraints” and presents innovative methods of research.
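The law of frequency matching can be illustrated with a toy simulation (my own construction, not from the paper; the lexical rate and item counts are made up): if some fixed proportion of existing stems in a variable class takes a given suffix, frequency-matching speakers choose that suffix with the same probability on each wug item, so their aggregate response rate mirrors the lexical rate.

```python
import random

random.seed(0)

# Hypothetical lexical statistic: suppose 70% of stems in some variable
# class take the back-vowel suffix (this number is invented for illustration).
LEXICAL_RATE = 0.70

def frequency_matching_response(rate):
    """One stochastic wug-test response: choose the back-vowel suffix (1)
    with probability equal to the lexical rate, else the front suffix (0)."""
    return 1 if random.random() < rate else 0

# Aggregate over many simulated responses; by the law of frequency
# matching, the aggregate rate should approximate the lexical rate.
n = 100_000
responses = sum(frequency_matching_response(LEXICAL_RATE) for _ in range(n))
aggregate_rate = responses / n
print(round(aggregate_rate, 2))  # a value near the lexical rate of 0.70
```

The point of the sketch is only that individual responses are probabilistic while the population-level rate converges on the lexical statistic.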
Hayes et al. (2009) claim that, given the distinction between natural and unnatural constraints, unnatural constraints CAN also be learned alongside inductive biases, which means that learners prefer natural constraints to unnatural ones. The logic of their argument is very convincing, but reading this paper raised two questions from a broader perspective.
(1) One question is about the acquisition of phonology in general, especially a learnability problem. If we assume that all phonological generalizations/patterns are accounted for in terms of markedness and faithfulness constraints, and hence that unnatural patterns also have to be reduced to these constraints, how do children acquire them? More specifically, since there can be no negative data available in the primary linguistic data, can children identify the relevant constraints, which are negative in nature? Is frequency enough for children to learn these unnatural patterns? I’m not sure that children, or human grammars in general, have “counters” in their brains.
(2) Another question is about the similarities and differences between natural and unnatural constraints. While the natural, universal constraints posited in OT are genetically determined, unnatural ones are acquired during language acquisition. Since it is standardly assumed that sound learning is done by “oblivion”, do natural and unnatural constraints/patterns have exactly the same characteristics in nature?
Moreton (2008) demonstrates that analytic bias alone is strong enough to “create typological asymmetries.” The results of the first experiment in this article (HV vs. HH coarticulation) suggest that a cognitive bias is in some way responsible for the performance of his test subjects (poor performance in the HV condition, good performance in the HH condition).
What I liked about this study is that the second experiment addressed some of the questions that naturally arise while reading: namely, that patterns involving a single feature might be easier to grasp than those involving multiple features, or that patterns occurring within the same tier might be easier to grasp. For this reason, he ran a follow-up experiment that used HH vs. VV conditions. As VV and HV are both typologically rare, there should still be a cognitive bias in favor of HH. He found that HH patterns are somewhat easier to learn than VV ones, but that VV conditions are in turn easier to learn than HV ones.
Wilson’s paper on velar palatalization addresses a substantive bias in phonological grammar. He describes two approaches to phonology: phonetically grounded phonology, much like what we discussed in class, and evolutionary phonology. I found the cited argument for evolutionary phonology compelling: Wilson cites Buckley (2003) as pointing out that children do not discriminate between patterns that are phonetically motivated and those that are not. Wilson himself settles for a happy medium.
Discussing the acoustic specifics of velar palatalization, he mentions that the acoustic properties of a velar stop depend on the following vowel and that high vowels tend to front the place of articulation of the velar stop. Fronting apparently makes the velar stop more affricate-like. There seems to be a link missing between these two phenomena: velar stop fronting before a front vowel and a velar stop’s tendency to affricate before a high vowel. Fronting and height are not the same phenomenon, and the affricate-like nature of a fronted velar stop seems only coincidentally relevant to its tendency to affricate before high, not front, vowels.
I also wonder about Guion’s second implicational law, which asymmetrically states that if [g] palatalizes to [dz], then [k] is likely to palatalize to [tsh], but not vice versa. The first implicational law has an understandable acoustic basis, but what phonetically grounded phenomenon makes the affrication of a voiced velar stop somehow more distinct?
Wilson’s approach does not rule out a language in which [k]/[g] palatalize before a low vowel, and for the purposes of language acquisition his fifty/fifty approach makes it possible to acquire palatalization of [k] before [a] but not before [i].
He goes on to discuss a mathematical model of perceptual distinctiveness far too complex for my understanding, but what caught my eye is the CRF, or conditional random field, model as applied to phonology. This model maps an input to an output through a series of corresponding labels, and constraints here behave much like functions would. There is also a way within this model to calculate the probability of a given mapping. I wonder to what extent this model is currently used in phonology and whether it can be applied to feature-specific constraints, much like the instances of fusion from Pater (2004).
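The idea of constraints behaving like functions, and of computing a mapping’s probability, can be sketched in a few lines. This is my own toy illustration of the general CRF recipe, not Wilson’s actual model: the constraint names, weights, and label set are all invented, and real CRFs use learned weights and efficient dynamic programming rather than brute-force enumeration.

```python
import math
from itertools import product

# Each "constraint" is a feature function over an input/output label pair
# (plus context) that fires (returns 1.0) when its condition is met.
def faithful(x, y):
    """Reward identity mappings (a faithfulness-like feature)."""
    return 1.0 if x == y else 0.0

def palatalize_before_i(x, y, next_x):
    """Fire when /k/ surfaces as [tS] before a following /i/."""
    return 1.0 if (x == "k" and y == "tS" and next_x == "i") else 0.0

WEIGHTS = {"faithful": 1.0, "palatalize_before_i": 2.0}  # made-up weights

def score(xs, ys):
    """Sum of weighted feature values over the whole input-output mapping."""
    s = 0.0
    for i, (x, y) in enumerate(zip(xs, ys)):
        nxt = xs[i + 1] if i + 1 < len(xs) else None
        s += WEIGHTS["faithful"] * faithful(x, y)
        s += WEIGHTS["palatalize_before_i"] * palatalize_before_i(x, y, nxt)
    return s

def probability(xs, ys, label_set):
    """P(ys | xs) = exp(score) / Z, normalizing over all candidate outputs."""
    z = sum(math.exp(score(xs, cand))
            for cand in product(label_set, repeat=len(xs)))
    return math.exp(score(xs, ys)) / z

labels = ["k", "tS", "i", "a"]
print(probability(("k", "i"), ("tS", "i"), labels))  # palatalized candidate
print(probability(("k", "i"), ("k", "i"), labels))   # faithful candidate
```

With these invented weights, the palatalized mapping outscores the faithful one before [i]; the appeal of the framework is exactly that constraint weights determine a well-defined probability distribution over candidate outputs.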
Hayes et al.’s approach to finding unnatural constraints strikes me as potentially problematic: ‘We also checked as many other reasonable-seeming unnatural environments that we could, simply relying on our intuitions of what might work.’ I am struggling to articulate my thoughts on this, but roughly: (a) it seems that relying on one’s intuitions about what is or is not natural is a potential source of trouble, and (b) might it be informative to distinguish between (i) unnatural but productive processes that arise over a series of historical changes, such as velar softening, and (ii) ‘extremely’ unnatural processes that would not have arisen over time?
I have a difficult time understanding parsimony arguments against UG-based theories like those Moreton discusses in the background section to his paper. One of the causes of my confusion stems from the apparent assumption that redundancy or greater memory load is somehow undesirable: why? This seems like another version of the problem I commented on in my very first post at the beginning of the semester. The second, and probably more serious, cause of my confusion stems from not understanding what the internal architecture or working mechanisms of ‘the phonetics’ are supposed to be in a theory that argues that constraining certain things can be ‘left to the phonetics.’ How is that supposed to work?