Response to Week 4 due 9/30

Please leave a question or comment by the end of the day Sunday 9/30 on:

McCarthy 2008: 115-136

Dresher 1991

Or anything that comes up in class this week.

9 thoughts on “Response to Week 4 due 9/30”

  1. Ethan Poole

    From both the chapter and class, I found Recursive Constraint Demotion an extremely useful tool. It provides a clear-cut starting point for an analysis, especially if you know the constraints, and/or a good method for ensuring that your ranking argument is correct. Finding the appropriate candidates is still, however, difficult.

    Also from class, our brief discussion of computational models of GEN sparked my interest in the topic. I have added the supplemental readings on computational models of OT from the syllabus to my reading list. Are there any other good articles on the topic, particularly on GEN?

  2. Joe Pater Post author

    I’ll try to keep coming back to the issue of finding useful failed candidates in class discussion.

    On implementing Gen, I’d probably start with the Tesar reading mentioned on the handout, and probably move on to Riggle, before tackling Karttunen. You may also be interested to know that Riggle and colleagues have implemented his Gen in a program called PyPhon (it’s written in Python). Also, you can implement a Harmonic Serialism Gen relatively easily in OT-Help – we should be able to take a look at that this semester.

  3. Jon Ander Mendia

    If I understood the chapter correctly, RCD is above all a learning algorithm. If we know the candidates and the constraints, RCD can tell us the ranking of those constraints. Provided that constraints are universal, I guess that we want to say that they are innate too. So we “know” the constraints. GEN is the component that gives us the candidates, so that the learner has the tools to start computing: GEN -> EVAL -> GEN -> EVAL, and so on. My question, then, is the following: is RCD part of EVAL? Intuitively it seems so, but crucially RCD needs to know which candidate is optimal in order to compare winner~loser pairs. Thus, RCD can only be applied after the computation. I find that somewhat confusing, maybe because I know nothing at all about learning systems.

    A final remark: I found the discussion of ERCs helpful for understanding the logic behind RCD.

  4. Fiona Dixon

    The second part of chapter two mentioned three concepts that were helpful in developing my understanding of OT: Harmonic Bounding, RCD, and the Richness of the Base. We’ve already discussed Harmonic Bounding in class, but I found his examples useful. I like RCD because it provides a quicker way of doing an OT analysis, even though it is somewhat limited. The Richness of the Base reaffirms that constraint rankings, rather than restrictions on inputs, are the only tools used in OT to account for differences between languages.
    With what I’ve read so far, I feel as though I have a fairly good handle on OT analysis. I particularly like the concept of challenging your own proposed rankings by looking for losing candidates that would suffice just as well. The only question that remains for me is how we choose the input form of a word. I understand that the winning candidate is the candidate that undergoes phonetic realization, and that a losing candidate is any other possible candidate that was not chosen, but how do we know which form is the underlying form?
    In the set of comments from last time, Professor Pater said that “the input defines the set of representations that are competing with one another”; in other words, this would be the general list of losing candidates. I like and agree with this definition, but what I’m curious about is the ONE candidate that is usually listed as the phonemic form of the word (its underlying representation). According to McCarthy, the Richness of the Base does not allow for absurd underlying representations. The fact that *[ŋkæt] violates Max does not make it the underlying representation of [kæt]. He writes that the underlying representation of [kæt] is /kæt/, of course (p. 93). My question is: what makes this underlying representation so obvious? And what is the process for choosing underlying representations, since only the phonetic forms are available to us?

  5. Yohei Oseki

    The second half of Ch.2 (pp. 115-136) in McCarthy (2008) is divided into two parts: 2.11 “Constraint Ranking by Algorithm and Computer” and 2.12 “The Logic of Constraint Ranking and Its Uses”.
    The former focuses mainly on a constraint demotion procedure, especially Recursive Constraint Demotion (RCD). This section clearly illustrates both how to construct a step-by-step argument with shading and removal in tableaux and when to adopt RCD in doing OT analyses (i.e. iff we know sets of winner-loser pairs and constraints beforehand). In addition, it also exposes two limitations of RCD: inconsistency detection and stratified partial ordering. Realizing the deficiencies of this technique in advance is highly important for achieving successful phonological analyses in the OT framework (e.g. RCD ≠ ranking argument in stratification, etc.). Furthermore, applications to learning theory and computer programming may be promising.
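    [Editor's note: the RCD procedure described above is compact enough to sketch in code. The following is an illustrative sketch only, not code from McCarthy or from Tesar & Smolensky; the encoding of ERCs as dicts mapping constraint names to 'W', 'L', or 'e' is the editor's own. The loop installs every constraint that prefers no loser, discards the ERCs accounted for by a newly installed winner-preferrer, and repeats; an inconsistency is detected when ERCs remain but no constraint can be installed.]

```python
def rcd(constraints, ercs):
    """Recursive Constraint Demotion (illustrative sketch).

    constraints: list of constraint names
    ercs: list of dicts mapping a constraint name to 'W', 'L', or 'e'
          (omitted constraints count as 'e')
    Returns a stratified ranking as a list of strata (lists of names),
    or raises ValueError if the ERCs are inconsistent.
    """
    remaining = list(constraints)
    active = list(ercs)
    strata = []
    while remaining:
        # A constraint is rankable now iff it prefers no loser
        # (has no 'L') in any still-active ERC.
        stratum = [c for c in remaining
                   if all(e.get(c, 'e') != 'L' for e in active)]
        if not stratum:
            # Every remaining constraint prefers some loser:
            # no ranking can satisfy the data.
            raise ValueError("inconsistent ERCs: no rankable constraint")
        strata.append(stratum)
        remaining = [c for c in remaining if c not in stratum]
        # Discard ERCs accounted for by a winner-preferring
        # constraint in the newly installed stratum.
        active = [e for e in active
                  if not any(e.get(c, 'e') == 'W' for c in stratum)]
    return strata
```

    For example, a single winner~loser pair in which NoCoda prefers the winner and Max prefers the loser yields the two strata {Dep, NoCoda} >> {Max}.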
    The latter section attempts to answer the question “[w]hich winner-loser pairs supply the most information about constraint ranking” (p. 124). The solution is derived from the Elementary Ranking Condition (ERC), based on entailment relations in informativeness and harmonic bounding. ERCs are also useful for getting the overall constraint ranking via ERC fusion. Interestingly, these logical aspects of OT have been formalized by Prince (2002) as follows: W-extension, L-retraction / L-dominance, e-identity, self-identity.
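    [Editor's note: ERC fusion itself is mechanical, so Prince's rules (L is dominant, e is the identity) can be stated directly. The sketch below uses the same illustrative encoding of an ERC as a dict from constraint names to 'W', 'L', or 'e'; the encoding is the editor's own, not Prince's notation.]

```python
def fuse(ercs, constraints):
    """Fuse a set of ERCs (Prince 2002 fusion rules, sketched).

    Per constraint: L if any ERC has L (L-dominance),
    otherwise W if any ERC has W, otherwise e (e is the identity).
    """
    fused = {}
    for c in constraints:
        vals = [e.get(c, 'e') for e in ercs]
        if 'L' in vals:
            fused[c] = 'L'
        elif 'W' in vals:
            fused[c] = 'W'
        else:
            fused[c] = 'e'
    return fused
```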
    Given this, I have one comment and one question:
    [Comment] I think that this logical aspect of the OT framework shows its restrictiveness and rigor as a scientific theory. Especially and importantly, as Joe also replied to my question, ERC entailment rules (including harmonic bounding) naturally constrain the infinite set of candidates generated by the GEN component without stipulations.
    [Question] I’m wondering whether ERC entailment rules and ERC fusion rules are psychologically real or merely techniques for doing OT analyses. If the former is the case, in which component(s)/level(s) are these rules applied (e.g. GEN, CON or EVAL)? If the latter is true, are they mathematically true? Because I have a mathematical background, I’m just curious about it. I will read Prince (2002) to make sure that these rules are mathematically proven.

  6. Covadonga Sanchez

    The second part of Chapter two continues to be really helpful in providing us with new techniques for the analysis of data. One of the main objectives of the sections in the last part of the chapter is to present the most common problems that can be encountered when analyzing data within the OT framework. One of these problems is inconsistency, and RCD seems to be a great method for detecting it, especially if used along with ERCs. Combining the two, it seems much easier to locate the source of inconsistency and move on with the analysis.
    These methods are just the result of applying some logic to the data presented in the tableaux, and they can also help in presenting tableaux in which only the most informative pairs are considered.
    RCD seems to be the most natural procedure for getting a sense of how constraints are ranked in a language. In this chapter, McCarthy tentatively points to the possibility that the procedure used by babies when acquiring their language may be similar to RCD. If this is true, and children can access the whole range of possible constraints and then decide which ones apply in their language, what happens with second language acquisition? Can learners access all of the possible constraints? What makes it so difficult to lose an accent, for example? What strategy do they/we use? Is RCD a possible answer to my questions? My intuition is that the starting point, however, is not the set of all possible constraints but a pre-established set, that of the native language. But how do we change it?

  7. Megan

    I have some thoughts in response to Jon Ander and Yohei’s comments above.

    EVAL and RCD/ERC actually operate in almost opposite ways. EVAL’s job is to decide the optimal candidate given a constraint ranking, whereas RCD/ERC are used to rank the constraints given an optimal candidate.

    Additionally, as I understand it, RCD is not part of the theory of OT but simply an algorithm to implement an analysis of an existing dataset under OT (with an implied existing constraint ranking). As such, we can’t necessarily talk about RCD as a component part of the OT framework. The same goes for ERC since, similarly, its purpose is to derive a ranking from known winner/loser pairs.
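    [Editor's note: the contrast drawn above can be made concrete with a toy sketch. EVAL maps a ranking to a winner, while RCD maps winner~loser pairs to a ranking. The encoding below (candidates as dicts of violation counts, a total ranking as an ordered list) is the editor's own illustration, not anything from the readings.]

```python
def eval_optimal(candidates, ranking):
    """EVAL, sketched: pick the candidate whose violation profile
    is best under a total ranking of constraints.

    candidates: dict mapping candidate name -> {constraint: violations}
    ranking: list of constraint names, highest-ranked first
    Profiles are compared lexicographically, so a violation of a
    higher-ranked constraint outweighs any number of violations of
    lower-ranked ones.
    """
    def profile(cand):
        return tuple(candidates[cand].get(c, 0) for c in ranking)
    return min(candidates, key=profile)
```

    With NoCoda ranked over Max, the coda-deleting candidate wins; reverse the ranking and the faithful candidate wins. RCD runs the inference in the opposite direction: told which candidate won, it recovers a ranking consistent with that outcome.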

    Is this the case, or am I misunderstanding? This is definitely an important point, so I’d like to be clear on it either way.

  8. araikhli

    When RCD was introduced in class, I was immediately infatuated with such an elegant tool for analysis. What I couldn’t accept without further discussion, however, is that RCD accurately represents the way a learner processes language. The Dresher reading put me back at ease regarding learnability, with its parametric theory, which was introduced to me in Tom Roeper’s acquisition class three long years ago in undergrad. I found myself drawing extensions from the parametric theory of acquisition described in Dresher to RCD, hoping to fit the latter into the former.

    Here are some crucial points from Dresher and my attempts to apply them to RCD:

    First off, a list of ordered binary parameters is dangerously similar to OT’s constraint ranking. Specifically, I have been puzzling over constraint selection in OT: why a constraint against codas (*Coda) and not a constraint for codas (Coda), while the constraint regarding onsets is one enforcing onsets rather than outlawing them? And if there is a constraint *Coda, will there also exist a constraint Coda in the same language, in the same ranking? In that case, if *Coda dominates Coda, would this not represent a binary constraint? I understand the descriptive and functional advantages of OT over the parametric theory, but has anybody sat down to rewrite the old parameters as constraints?

    Last Thursday we talked about the Minimality parameter, which tells us whether or not to parse a monosyllabic foot. To express this parameter in OT, we had two constraints, Ft-Bin (assign a violation mark to a monosyllabic foot) and Parse-Syl (assign a violation mark to a syllable that is not parsed into a foot). My question is one of notation: why not express the Minimality concept with Parse-Syl and *Parse-Syl? The question is kind of silly, but I am interested in whether the transition from the parametric theory to OT was fluid or a clean break.

    Now to things more pertinent to RCD and ERCs. The more familiar parametric theory assumes a process of incremental learning, where a cue in the learner’s data can change the setting of a parameter. Each introduced inconsistency would trigger a revision. In the case of RCD, an inconsistency would stall the mechanism, and the learner would need to parse each batch of data separately. Is there current evidence that favors a batch learner over an incremental learner?

  9. Hsin-Lun Huang

    I also think that both RCD and ERCs are very useful instruments for constructing a viable ranking of constraints, because the situation we normally run into is having to analyze some amount of linguistic data. This is when RCD and ERCs come in very handy, and we are almost guaranteed solid ground for our ranking argument. Recall how we first started building rankings of constraints: we need to find data that put two constraints in conflict. After trying all the hierarchical possibilities of the constraints, we can finally come up with a correct ranking that accounts for all the data.

    Now RCD and ERCs just seem so convenient. With them, we are able to see inconsistency in a constraint ranking, harmonic bounding relationships between candidates, and entailment relations between violations; we can even use a method like ERC fusion to eliminate loser candidates. All these things lead us to our ultimate goal, a perfect ranking of constraints. I am now completely in favor of using RCD and ERCs to build up my OT analysis, because they seem so failure-proof. But on second thought, everything comes with a price. Can we trust RCD and ERCs wholeheartedly? I really wonder whether there is any possibility of a bad result after applying RCD and ERCs, if one simply relies on them to do an OT analysis.

