Author Archives: Joseph Pater

Foley colloquium at 3:30 Friday March 1

Steven Foley (UCSC) will present “Why are ergatives hard to process? Reading-time evidence from Georgian” in ILC N400 at 3:30. All are welcome!

ABSTRACT: How easily a filler–gap dependency is processed can depend on the syntactic position of its gap: in many languages, for example, subject-gap relative clauses are generally easier to process than object-gap relatives (Kwon et al. 2013). One possible explanation for this is that certain syntactic positions might be intrinsically more accessible for extraction than others (Keenan & Comrie 1977). Alternatively, processing difficulty might correlate with the relative informativity of morphosyntactic cues (e.g., case) ambient to the gap (Polinsky et al. 2012; cf. Hale 2006). Ergative languages are ideal for disentangling these two theories, since they decouple case morphology (ergative ~ absolutive) and syntactic role (subject ~ object). This talk presents reading-time data from Georgian, a split-ergative language, which suggests that case may indeed be a crucial factor affecting real-time comprehension. Across four self-paced reading experiments, ergative DPs in different configurations are read consistently slower than absolutive ones — bearing out the predictions of the informativity hypothesis. However, the case is not closed: accusative morphology, at least in Japanese and Korean, does not seem to be associated with a processing cost, even though it is just as informative as ergative is. To reconcile this ergative–accusative processing asymmetry, I turn to the debate in formal syntax between different modalities of case assignment, and argue that a theory in which case is assigned by functional heads (Chomsky 2000, 2001) gives us better traction for understanding both Georgian-internal and crosslinguistic processing data than does a configurational theory of case (Marantz 1991).

Syrett colloquium Fri. Feb. 22 at 3:30

Kristen Syrett of Rutgers University (https://sites.rutgers.edu/kristen-syrett/) will present “Playing with semantic building blocks: Acquiring the lexical representations of verbs and adjectives” in ILC N400 Friday Feb. 22 at 3:30. All are welcome!
ABSTRACT: Early lexicons and initial child productions reflect a preponderance of object-denoting lexical items (nouns), while those that denote properties of objects or events (adjectives and verbs) lag behind. If nouns are the “Marsha” of the Brady Bunch, adjectives and verbs compete for the role of “Jan.” In many ways, this asymmetry privileging nouns makes sense: it’s much easier to track event participants than to track ephemeral events and the properties of those participants, which are much less stable, and both verbs and adjectives require nominal elements both syntactically and semantically. But language acquisition is rapid: within a matter of a few years, the child achieves proficiency, enough so to appreciate polysemy or word play. Given this state of affairs, we might ask two questions about the acquisition of these predicates: (1) What strategies or information sources do children recruit to pin down the lexical meaning of verbs and adjectives?, and (2) When they enter into the lexicon, how rich is children’s semantic knowledge of these words? In this talk, I provide one answer to (1), showcasing the role of the linguistic context. I then highlight a set of examples in response to (2), illustrating children’s early command of selectional restrictions for both categories. In doing so, I also demonstrate that once these words are established as part of children’s receptive and productive vocabulary, there are certain advantages afforded to the language learner—although here, we uncover an asymmetry between verbs and adjectives implicating other aspects of the grammar and the context. Together, what this body of work reveals is the complex, interrelated process of acquiring and assembling the semantic building blocks of language.

Perkins colloquium Fri. Feb. 15 at 3:30

Laurel Perkins of the University of Maryland (http://ling.umd.edu/~perkinsl/) will present “How to Grow a Grammar: Syntactic Development in 1-Year-Olds” on Friday Feb. 15th at 3:30 PM in N400. All are welcome – an abstract follows.
ABSTRACT: What we can learn depends on what we already know; a child who can’t count cannot learn arithmetic, and a child who can’t segment words cannot identify properties of verbs in her language. Language acquisition, like learning in general, is incremental. How do children draw the right generalizations about their language using incomplete and noisy representations of their linguistic input?
In this talk, I’ll examine some of the first steps of syntax acquisition in 1-year-old infants, using behavioral methods to probe their linguistic representations, and computational methods to ask how they learn from those representations. Taking argument structure as my case study, I will show: (1) that infants represent core clause arguments like “subject” and “object” when learning verbs, (2) that infants can cope with “non-basic” clause types, where those arguments have been displaced, by ignoring some of their input, and (3) that it is possible for infants to learn what kind of data to ignore, even before they can parse it. I will argue that the approach I take for studying this particular learning problem will generalize widely, allowing us to build new models for understanding the role of development in grammar learning.

“Recursion across Domains” published by CUP

A book edited by Luiz Amaral, Marcus Maia, Andrew Nevins, and Tom Roeper on “Recursion across Domains” was recently published by Cambridge University Press. As Tom Roeper notes:

This book has a large UMass footprint. Its editors include Luiz Amaral and Tom Roeper, and its contributors include many former students, faculty, and visitors: Suzi Lima, Bart Hollebrandse, Ana Perez, Uli Sauerland, Yohei Oseki, Terue Nakato, Rafael Nonato, Luiz Amaral, and Tom Roeper.

Summary: Recursion and self-embedding are at the heart of our ability to formulate our thoughts, articulate our imagination, and share them with other human beings. Nonetheless, controversy exists over the extent to which recursion is shared across all domains of syntax. A collection of 18 studies is presented here on the central linguistic property of recursion, examining a range of constructions in over a dozen languages representing great areal, typological and genetic diversity and spanning wide latitudes. The volume expands the topic to include prepositional phrases, possessives, adjectives, and relative clauses – our many vehicles to express creative thought – to provide a critical perspective on claims about how recursion connects to broader aspects of the mind. Parallel explorations across language families, literate and non-literate societies, and children and adults constitute a new step in the generative tradition, simultaneously focusing on formal theory, acquisition and experimentation, and ecologically sensitive fieldwork, and initiating a new community in which these diverse experts collaborate.

Table of Contents:

Foreword (Ian Roberts)

A Map of the Theoretical and Empirical Issues (Amaral, Maia, Roeper, & Nevins)

Speech Reports, Theory of Mind and Evidentials

  1. Sauerland, Uli. False speech reports in Pirahã: A comprehension experiment
  2. Hollebrandse, Bart. Indirect recursion: the importance of second-order embedding and its implications for cross-linguistic research
  3. Correa, Letícia M. S., Marina R. A. Augusto, Mercedes Marcilese & Clara Villarinho. Recursion in language and the development of higher order cognitive functions: an investigation with children acquiring Brazilian Portuguese
  4. Stenzel, Kristine. Embedding as a building block of evidential categories in Kotiria
  5. Thomas, Guillaume. Embedded imperatives in Mbyá

Recursion along the Clausal Spine

  1. Rodrigues, Cilene, Raiane Salles & Filomena Sandalo. Word order in control: evidence for self-embedding in Pirahã
  2. Nonato, Rafael. Switch-reference is licensed by both kinds of coordination: novel Kĩsêdjê data
  3. Duarte, Fabio. Clausal recursion, predicate raising and head-finality in Tenetehára
  4. Vieira, Marcia. Recursion in Tupi-Guarani languages: The Cases of Tupinambá and Guaraní

Recursive Possession and Relative Clauses

  1. Terunuma, Akiko & Terue Nakato. Recursive possessives in Child Japanese
  2. Lima, Suzi, & Pikuruk Kaiabi. Recursion of possessives and locative phrases in Kawaiwete
  3. Amaral, Luiz & Wendy Leandro. Relative Clauses in Wapichana and the interpretation of multiple embedded “uraz” Constructions
  4. Storto, Luciana, Karin Vivanco, & Ivan Rocha. Multiple embedding of relative clauses in Karitiana

Recursion in the PP Domain

  1. Roeper, Tom & Yohei Oseki. Direct structured recursion in the acquisition path from flat to hierarchical structure
  2. Sandalo, Filomena, Cilene Rodrigues, Tom Roeper, Luiz Amaral, Marcus Maia & Glauber Romling. Self-embedded recursive postpositional phrases in Pirahã: a pilot study
  3. Perez-Leroux, Ana T., Anny Castilla-Earls, Susana Bejar, Diane Massam & Tyler Peterson. Strong continuity and children’s development of DP recursion
  4. Franchetto, Bruna. Prosody and recursion in Kuikuro: DPs vs PPs
  5. Maia, Marcus, Aniela França, Aline Gesualdi, Aleria Lage, Cristiane Oliveira, Marije Soto & Juliana Gomes. The processing of PP embedding and coordination in Karajá and in Portuguese


Momma colloquium Friday Feb. 8 at 3:30

Shota Momma of UC San Diego (https://shotam.github.io) will present “Unifying parsing and generation” at 3:30, Friday February 8th in ILC N400. All are welcome!
Abstract: We use our grammatical knowledge in at least two ways. On one hand, we use our grammatical knowledge to say what we want to convey to others. On the other hand, we use our grammatical knowledge to understand what others say. In either case, we need to assemble sentence structures in a systematic fashion, in accordance with the grammar of our language. In this talk, I will advance the view that the same syntactic structure building mechanism is shared between comprehension and production, specifically focusing on sentences involving long-distance dependencies. I will argue that both comprehenders and speakers anticipatorily build (i.e., predict and plan) the gap structure, soon after they represent the filler and before representing the words and structures that intervene between the filler and the gap. I will discuss the basic properties of the algorithm for establishing long-distance dependencies that I hypothesize to be shared between comprehension and production, and suggest that it resembles the derivational steps for establishing long-distance dependencies in an independently motivated grammatical formalism, known as Tree Adjoining Grammar.

LAWNE held Dec. 1 at UMass

LAWNE (Language Acquisition Workshop Northeast), held at UMass Dec. 1, brought together students and faculty from UMass, UConn, and MIT. Papers were presented on ellipsis, null subjects, presuppositional too, recursion, math in language, and passives, drawing on methods from comprehension experiments, naturalistic data, and second language acquisition. A few of the authors gathered for a picture afterwards, shown below.


Magnuson CogSci talk at noon Wednesday in ILC N400

James Magnuson (https://magnuson.psy.uconn.edu/) will present a talk sponsored by the Five College Cognitive Science Speaker Series in ILC N400 at noon on Wednesday the 27th. Pizza will be served. The title and abstract are below. All are welcome!

EARSHOT: A minimal neural network model of human speech recognition that learns to map real speech to semantic patterns

James S. Magnuson, Heejo You, Hosung Nam, Paul Allopenna, Kevin Brown, Monty Escabi, Rachel Theodore, Sahil Luthra, Monica Li, & Jay Rueckl

One of the great unsolved challenges in the cognitive and neural sciences is understanding how human listeners achieve phonetic constancy (seemingly effortless perception of a speaker’s intended consonants and vowels under typical conditions) despite a lack of invariant cues to speech sounds. Models (mathematical, neural network, or Bayesian) of human speech recognition have been essential tools in the development of theories over the last forty years. However, they have been little help in understanding phonetic constancy because most do not operate on real speech (they instead focus on mapping from a sequence of consonants and vowels to words in memory), and most do not learn. The few models that work on real speech borrow elements from automatic speech recognition (ASR), but do not achieve high accuracy and are arguably too complex to provide much theoretical insight. Over the last two decades, however, advances in deep learning have revolutionized ASR, with neural network approaches that emerged from the same framework as those used in cognitive models. These models do not offer much guidance for human speech recognition because of their complexity. Our team asked whether we could borrow minimal elements from ASR to construct a simple cognitive model that would work on real speech. The result is EARSHOT (Emulation of Auditory Recognition of Speech by Humans Over Time), a neural network trained on 1000 words produced by 10 talkers. It learns to map spectral slice inputs to sparse “pseudo-semantic” vectors via recurrent hidden units. The element we have borrowed from ASR is to use “long short-term memory” (LSTM) nodes. LSTM nodes have a memory cell and internal “gates” that allow nodes to become differentially sensitive to variable time scales. EARSHOT achieves high accuracy and moderate generalization, and exhibits human-like over-time phonological competition. Analyses of hidden units – based on approaches used in human electrocorticography – reveal that the model learns a distributed phonological code to map speech to semantics that resembles responses to speech observed in human superior temporal gyrus. I will discuss the implications for cognitive and neural theories of human speech learning and processing.
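To make the model description above concrete, here is a minimal sketch in Python (PyTorch) of a network of the kind the abstract describes: spectral slices fed to a recurrent layer of LSTM units, read out as a sparse “pseudo-semantic” vector at each time step. This is an illustration only, not the authors’ EARSHOT code; the class name, layer sizes, and training loss shown are assumptions.

```python
# Minimal sketch (illustrative only, not the authors' EARSHOT code) of the
# architecture described above: spectral slices -> recurrent LSTM layer ->
# sparse "pseudo-semantic" output vector. All sizes are assumptions.
import torch
import torch.nn as nn

class SpeechToSemantics(nn.Module):
    def __init__(self, n_freq=256, n_hidden=512, n_semantic=300):
        super().__init__()
        # "Long short-term memory" units: each has a memory cell and internal
        # gates, letting units become sensitive to different time scales.
        self.lstm = nn.LSTM(input_size=n_freq, hidden_size=n_hidden,
                            batch_first=True)
        # Linear readout from the hidden state to the semantic target vector.
        self.readout = nn.Linear(n_hidden, n_semantic)

    def forward(self, spectrogram):
        # spectrogram: (batch, time, n_freq), one spectral slice per time step.
        hidden_states, _ = self.lstm(spectrogram)
        # An output is produced at every time step, so lexical competition can
        # be tracked over time as the word unfolds.
        return torch.sigmoid(self.readout(hidden_states))

# Training would pair each spoken token (e.g., 1000 words x 10 talkers) with a
# fixed sparse binary target vector and minimize a loss such as:
#   model = SpeechToSemantics()
#   loss = nn.BCELoss()(model(speech_batch), target_vectors)
```

The LSTM gating is the one element borrowed from ASR: each unit’s internal memory cell and gates allow different hidden units to become differentially sensitive to variable time scales, as described in the abstract.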