Please post a comment on Bobaljik’s or McCarthy’s paper.
5 thoughts on “Comments on Optimal Paradigms”
Minta
I’m curious about Bobaljik’s claim that syllabification applies cyclically to verbs, but non-cyclically to nouns. Specifically, how would we make this work in a constraint-based theory? That is, where in the machinery would we put this stipulation that distinguishes between verbs and nouns?
My first thought is in the constraint set: we could establish two PARSE-SYLL constraints, one for verbs and one for nouns. In OT-CC, we can delay syllabification of nouns until the end of the derivation by ranking PARSE-SYLL(N) so far down in the hierarchy that any other change will take precedence. Of course, this ranking would have to be universally fixed. The question is, universally fixed with respect to what? Universally fixed rankings are usually used to establish a relationship among specific constraints, not to give a single constraint an absolute ranking (i.e., “bottom of the hierarchy”). As an alternative, we could ensure that the PREC constraints penalize syllabifying nouns before making any other change, but since PREC constraints are formalized in terms of FAITH violations, we’d have to assume that syllabification is somehow unfaithful, despite the fact that there’s no evidence whatsoever for contrastive syllabification.
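To make the bottom-of-the-hierarchy idea concrete, here’s a toy sketch in Python (plain ranked evaluation, not OT-CC proper; the constraints, candidates, and violation counts are all invented for illustration):

```python
# Toy EVAL: candidates are compared lexicographically on their violation
# profiles, so a constraint at the very bottom only ever decides ties, and
# any other change is preferred over syllabifying the noun.

RANKING = ["MAX", "DEP", "NOCODA", "PARSE-SYLL(V)", "PARSE-SYLL(N)"]  # hypothetical hierarchy

def most_harmonic(candidates):
    """candidates: {name: {constraint: violation count}}; lower profiles win."""
    def profile(name):
        return tuple(candidates[name].get(c, 0) for c in RANKING)
    return min(candidates, key=profile)

# Two possible next steps for an unsyllabified noun that also has a bad coda:
candidates = {
    "syllabify-the-noun": {"NOCODA": 1, "PARSE-SYLL(N)": 0},
    "repair-the-coda":    {"NOCODA": 0, "PARSE-SYLL(N)": 1},
}
print(most_harmonic(candidates))  # -> 'repair-the-coda': the other change wins
```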
Implementing this type of analysis in Stratal OT is even more daunting: because Stratal OT allows stratum-internal parallelism, within any given stratum there’s nothing to stop the syllabification of nouns demanded by PARSE-SYLL(N) (unless you’re a big believer in *Struc). So the only way Stratal OT can guarantee that nouns don’t get syllabified until the final stratum is to have PARSE-SYLL(N) present only in the final stratum. But again, as far as I know, Stratal OT assumes the same set of constraints in each stratum, so this is another radical move.
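A minimal sketch of what that radical move would look like, with the representation, strata, and constraints all invented for illustration:

```python
# A Stratal OT pipeline where the constraint set itself differs per stratum,
# with PARSE-SYLL(N) present only in the final stratum.

def parse_syll_n(form):          # one violation per unsyllabified segment of a noun
    return 0 if form.get("syllabified") else len(form["segments"])

def faith(form):                 # stand-in faithfulness constraint (never violated here)
    return form.get("changes", 0)

def gen(form):
    """Trivial GEN: the form as-is, or with syllable structure parsed."""
    return [form, {**form, "syllabified": True}]

def eval_stratum(form, ranking):
    """Parallel evaluation within one stratum: lexicographic comparison of
    violation profiles. Ties go to the first (unchanged) candidate, since
    without *Struc nothing in the grammar decides."""
    return min(gen(form), key=lambda c: tuple(con(c) for con in ranking))

# The constraint set per stratum: PARSE-SYLL(N) enters only at the last one.
NOUN_STRATA = [
    [faith],                  # stem stratum: nothing demands syllabification
    [faith],                  # word stratum: still nothing
    [parse_syll_n, faith],    # final stratum: the noun finally syllabifies
]

noun = {"segments": ["b", "e", "m"]}
for ranking in NOUN_STRATA:
    noun = eval_stratum(noun, ranking)
print(noun["syllabified"])    # True, but only after the final stratum
```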
So if we can’t put the stipulation that distinguishes nouns from verbs in the constraint set/ranking, can we put it in GEN? That seems even harder to me, since in most analyses I’m familiar with, GEN rules out representations, not the processes that create them. This being the case, to delay syllabification of nouns using GEN, we’d have to say that GEN can’t produce syllabified nouns. Of course, this is surface-false, so what we really want to say is that GEN can’t produce syllabified nouns until the final step/iteration/stratum of the derivation, and that removes a disturbing amount of restrictiveness from GEN.
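Just to spell out how much power that hands to GEN, here’s a minimal sketch of a GEN indexed to the derivational step (again, the representation and the notion of ‘step’ are pure invention):

```python
# A GEN that has to know the category of the form and where we are in the
# derivation, and simply refuses to emit syllabified nouns before the final step.

def gen(form, step, final_step, category):
    """Return candidate outputs for `form`. Normally GEN constrains
    representations; here it also tracks derivational timing and category,
    which is the worrying part."""
    candidates = [form, {**form, "syllabified": True}]
    if category == "N" and step < final_step:
        # Suppress syllabified noun candidates before the last step.
        candidates = [c for c in candidates if not c.get("syllabified")]
    return candidates

print(gen({"segments": "bem"}, step=1, final_step=3, category="N"))  # unsyllabified only
print(gen({"segments": "bem"}, step=3, final_step=3, category="N"))  # both candidates
```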
Is there any hope for implementing Bobaljik’s claim in a constraint-based account?
This is another example of a V-N asymmetry of the sort discussed by Bobaljik, which I think supports his claim that many of these asymmetries can/should be accounted for by referring to hierarchical structure rather than to other members of the paradigm.
In Turkish, the difference between verbs and nouns can (probably) only be explained as (1) a difference in the hierarchical structure of verbs and nouns or (2) a difference in the cyclic application of rules to verbs versus nouns.
The gap arises in the context CV-C: a CV root cannot inflect with a bare -C suffix. This is true both for CV roots that are verbs and for CV roots that are nouns (*je-n “eat-passive”, *be-m “B-possessive”).
The interesting thing is what happens when another suffix is added. For verbs, adding another suffix can rescue the bad form, e.g., je-n-ir. For nouns, adding another suffix can’t make the result any more grammatical, e.g., *be-m-u.
One account of these data is that nouns are assessed cyclically and verbs are not. That is, ‘be-m-u’ must pass through ‘be-m’, which is ungrammatical, while ‘je-n-ir’ does not require evaluation of ‘je-n’.
Both ‘je’ and ‘be’ are possible free-standing roots, and they don’t differ with respect to their inability to inflect with a single -C. The only difference between them is their status as noun and verb.
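Here’s a toy sketch of the cyclic-vs-noncyclic account (the well-formedness condition is stated crudely as ‘a bare CV root plus a lone -C suffix is out’; roots, suffixes, and the check itself are simplified purely for illustration):

```python
# Cyclic evaluation checks every intermediate stem; noncyclic evaluation checks
# only the fully suffixed form.

VOWELS = set("aeiou")

def well_formed(root, suffixes):
    """Ban the CV-C configuration: a CV root whose only suffixal material is a lone C."""
    suffixal = "".join(suffixes)
    cv_root = len(root) == 2 and root[1] in VOWELS
    return not (cv_root and len(suffixal) == 1 and suffixal not in VOWELS)

def grammatical(root, suffixes, cyclic):
    if cyclic:
        return all(well_formed(root, suffixes[:i]) for i in range(1, len(suffixes) + 1))
    return well_formed(root, suffixes)

print(grammatical("je", ["n", "ir"], cyclic=False))  # True:  je-n-ir, verb, no inner cycle
print(grammatical("be", ["m", "u"],  cyclic=True))   # False: be-m-u must pass through *be-m
```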
Something I’m still working out: what sort of data would we need to show that these differences don’t come from other paradigms?
Did Bobaljik claim that syllabification is cross-linguistically cyclic in verbs and noncyclic in nouns? I thought he only meant that this is a possible situation, so we’d need the ability to refer to N and V in the constraints, but not to fix their rankings.
I’m not sure I understand the argument from Bobaljik’s section 4.2.1. He seems to be saying that it’s worrying that verbs whose stems have homophonous, semantically related nouns still act like other verbs. Is this really a concern? I would think that anything non-verbal would make a poor base for a verb; is that wrong? If not, wouldn’t we expect really odd things, like semantically unrelated homophonous nouns removing paradigmatic effects on verbs?
The systemic nature of the OP-faith constraints bothers me, but I also think it’s an interesting way of representing speakers’ knowledge that things are related. In terms of a connectionist network, we could think of words that are stored as whole words, but all related via a root (like different binyans of Arabic verbs), as having fairly strong connections to each other, or to some shared unit corresponding to the root, so that in production you have to consider all the ‘paradigmatic neighbors’ of the word you’re trying to say as well as the word itself.
Likewise, we could even think about affixes (or non-root morphemes) as having their own paradigms. That is, all word units connected to that affix’s unit are considered together. There could be inhibitory connections between the different affix nodes, leading to forced differences between different members of the paradigm.
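As a very rough sketch of what I have in mind (units, weights, and the update rule are all invented, and ‘katab’/‘kattab’ just stand in for two binyan-mates of a shared root):

```python
import numpy as np

# Whole-word units linked through a shared root unit (excitatory) and through
# affix units that inhibit each other, so producing one word co-activates, and
# then competes with, its paradigmatic neighbors.

units = ["ROOT-ktb", "AFF-binyan1", "AFF-binyan2", "katab", "kattab"]
idx = {u: i for i, u in enumerate(units)}

W = np.zeros((len(units), len(units)))

def connect(a, b, w):
    W[idx[a], idx[b]] = W[idx[b], idx[a]] = w

connect("ROOT-ktb", "katab", 1.0)            # words excite, and are excited by, their root
connect("ROOT-ktb", "kattab", 1.0)
connect("AFF-binyan1", "katab", 1.0)         # and their affixal (binyan) unit
connect("AFF-binyan2", "kattab", 1.0)
connect("AFF-binyan1", "AFF-binyan2", -0.8)  # competing affixes inhibit each other
connect("katab", "kattab", -0.5)             # paradigmatic neighbors compete too

act = np.zeros(len(units))
act[idx["katab"]] = 1.0                      # try to produce 'katab'
for _ in range(5):                           # a few steps of activation spreading
    act = np.clip(act + 0.2 * (W @ act), 0.0, 1.0)

# The shared root and binyan-1 units saturate; the neighbor 'kattab' is
# co-activated via the root but held down by inhibition from 'katab'.
print(dict(zip(units, act.round(2).tolist())))
```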
Then I got to thinking about the ‘psychological reality’ behind faithfulness constraints, and what it means for a speaker to know that two words are related (or at least, how we know that two words that are both stored share a root). Knowledge of relatedness could probably be represented in a connectionist network as shared connections.