Yearly Archives: 2016

Charnavel in Linguistics Friday Dec. 2 at 3:30

Isabelle Charnavel of Harvard University will present a colloquium in the Department of Linguistics in ILC 400, on Friday Dec. 2 at 3:30. All are welcome!

Title:  Logophoricity in Adjunct Clauses

Abstract:

It has long been observed that perspective sensitive elements such as logophoric pronouns or long distance reflexives can occur in some circumstantial subordinate clauses. The goal of the talk is to explore the consequences of this observation for the syntax and semantics of adjunct clauses. Based on the case study of causal clauses in English (and French), I argue that such adjuncts are intrinsically perspectival: an adjunct conjunction (e.g. because/since) expresses a relation (e.g. cause) between an argument A (roughly, the main clause) and its complement B, which is established by a judge. The identity of the judge, which can be diagnosed by the use of elements oriented towards the speaker or the event participant in the adjunct clause, depends on the nature of A (namely event/state, proposition or speech act).

(1)  Liz has a fever because she has malaria. (cause of state according to Liz or speaker)

(2)  Liz has malaria since she has a fever. (reason for truth of proposition or cause of assertion according to speaker)

Moreover, the scope and judge possibilities of causal clauses increase in the presence of attitude clauses: an attitude holder can be the causal judge for an embedded state/event or an embedded proposition.

Ultimately, the correlations between the scopal and the perspectival possibilities of causal clauses show that the perspective parameter (the judge) can be analyzed as a silent argument of the conjunction that must be bound within its clause.

Rooshenas in Machine Learning and Friends Wednesday Nov. 30 at 11:30

who: Pedram Rooshenas, University of Oregon
when: 11:30am, Wednesday, Nov 30
where: Computer Science Building rm150
food: wraps from The Works

Learning Tractable Graphical Models

Abstract:
Probabilistic graphical models have been successfully applied to a wide variety of fields such as computational biology, computer vision, natural language processing, robotics, and many more. However, in probabilistic models for many real-world domains, exact inference is intractable, and approximate inference may be inaccurate.  In this talk, we discuss how we can learn tractable models such as arithmetic circuits (ACs) and sum-product networks (SPNs), in which marginal and conditional queries can be answered efficiently.

We also discuss how we can learn these tractable graphical models in a discriminative setting, in particular by introducing Generalized ACs, which combine ACs and neural networks.

Bio:
Pedram Rooshenas is a Ph.D. candidate at the University of Oregon working with Prof. Daniel Lowd. Pedram’s research interests include learning and inference in graphical models and deep structured models.

Pedram holds an M.Sc. in Information Technology from Sharif University, Tehran, with a thesis on data reduction in wireless sensor networks, and an M.Sc. in Computer Science from the University of Oregon.

Pedram also maintains Libra, an open-source toolkit for learning and inference with discrete probabilistic models.

Rysling in Cognitive Brown Bag Weds. Nov. 30

Amanda Rysling (UMass Linguistics) will be presenting in this week’s cognitive brown bag – all are welcome!

Title: Preferential early attribution in incremental segmental parsing

Time: 12:00pm to 1:15pm Wednesday Nov. 30. Location:  Tobin 521B.

Abstract: Recognizing the speech we hear as the sounds of the languages we speak requires solving a parsing problem: mapping from the acoustic input we receive to the phonemes and words we recognize as our language. The literature on segmental perception has focused on cases in which we as listeners seem to successfully undo the blending of speech sounds that naturally occurs in production. The field has identified many cases in which listeners seem to completely attribute the acoustic products of articulation to the sounds whose articulation created them, and so seem to solve the parsing problem efficiently and with few errors. Only a handful of studies have examined cases in which listeners seem to systematically “mis-parse,” attributing the acoustic products of one sound’s articulation to another sound, and failing to disentangle the blend of their production. In this talk, I review the results of six phoneme categorization studies demonstrating that such failure to completely undo acoustic blending arises when listeners must judge one sound in a string relative to the sound that follows it, and the acoustic transitions between the two sounds are gradual. I then report the results of studies demonstrating that listeners persist in attributing the acoustic products of a second sound’s articulation to a first sound even when the signal conveys early explicit evidence about the identity of that second sound, evidence that could have been leveraged to begin disentangling the first sound from the second before the second was fully realized.

I go on to argue for a shift in our perspective on segmental parsing. Attributing the product of a later sound’s articulation to an earlier sound seems inefficient or undesirable when we take the goal of segmental parsing to be the complete attribution of acoustic products to exactly the sounds whose articulations gave rise to them. But when we consider the fact that listeners necessarily perceive the evidence for events in the world at a delay from when those events occurred, it is adaptive to prefer attributing later-incoming acoustic signal to earlier speech sounds.

Xue in MLFL Thursday Nov. 3 at noon

Tianfan Xue will present “Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks” in the Machine Learning and Friends Lunch this Thursday at noon. Details, abstract, and a short bio are below.

What is it? A gathering of students/faculty/staff with broad interest in the methods and applications of machine learning.
When is it? Thursdays 12:00pm to 1:00pm, unless otherwise noted. Arrive at 11:45 to get pizza.
Where is it? CS150
Who is invited? Everyone is welcome.
Is there food? Yes! Pizza is provided.

Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks

Abstract:
We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video frames. Please refer to our website for more details: http://visualdynamics.csail.mit.edu/.
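
The division of labor the abstract describes (image information encoded as feature maps, motion information as convolutional kernels) can be sketched in a few lines. The shapes, values, and helper function below are toy stand-ins for illustration, not the published architecture:

```python
import numpy as np

# Sketch of the "cross convolution" idea: the image branch produces feature
# maps, the motion branch produces convolutional kernels, and the layer
# combines them by convolving each feature map with its motion-derived kernel.

def conv2d_same(feat, kernel):
    """Naive 'same' 2-D convolution (cross-correlation) with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(feat, ((ph, ph), (pw, pw)))
    out = np.zeros_like(feat)
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
feature_maps = rng.normal(size=(4, 8, 8))   # stand-in for the image encoder output
kernels = rng.normal(size=(4, 3, 3))        # stand-in for sampled motion kernels

# Cross-convolution layer: per-channel convolution of features with kernels.
out = np.stack([conv2d_same(f, k) for f, k in zip(feature_maps, kernels)])
print(out.shape)  # (4, 8, 8)
```

Because the kernels come from a sampled motion code, drawing different samples yields different plausible motions, and hence different synthesized future frames, from the same input image.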

Short bio:
Tianfan Xue is currently a fifth-year Ph.D. student at MIT CSAIL, working with William T. Freeman. Before that, he received his B.E. degree from Tsinghua University and his M.Phil. degree from The Chinese University of Hong Kong. His research interests include computer vision, image processing, and machine learning; in particular, he is interested in motion estimation and in image and video processing based on motion information.

Gow in Linguistics 3:30 Fri. Nov. 4

David Gow (Cognitive/Behavioral Neurology Group, Massachusetts General Hospital) will present “Inference, phonology and the brain: What Granger analysis can tell us about the sources of phonological structure” on Friday, November 4, 2016, at 3:30 PM in ILC N400. An abstract follows.

Abstract: Speech perception reflects the lawful phonological patterning of language. This has been explained reasonably well through vastly different approaches involving generative phonological rules and constraints, statistical inference, and interactive associative mapping processes. This three-way distinction can be distilled to different accounts of the functional architecture of language processing. Unfortunately, claims about functional architecture (e.g. modularity versus interactivity) have proven notoriously hard to falsify using traditional behavioral and BOLD imaging techniques. In this talk I will introduce the use of Kalman-filter enabled Granger causation analysis of MR-constrained MEG/EEG data as a powerful new tool to discover functional architecture. By identifying the pattern of directed influences between functionally interpretable brain regions during task performance, this technique provides an entirely data-driven approach for discovering functional architecture. Using this approach, I will present data that challenge both statistical and rule/constraint accounts of phonotactic influences on perception, and suggest that phonotactic phenomena are the result of top-down lexical influences on speech perception.
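
The directed-influence logic behind Granger analysis can be sketched with a bare-bones lag-1 least-squares version: signal x "Granger-causes" signal y if x's past improves the prediction of y beyond y's own past. The simulated signals and single-lag regression below are illustrative only; the talk's method (Kalman-filter enabled Granger analysis of MR-constrained MEG/EEG source estimates) is far more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()  # x drives y

def rss(design, target):
    """Residual sum of squares from a least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return float(resid @ resid)

target = y[1:]
ones = np.ones(n - 1)
restricted = np.column_stack([ones, y[:-1]])    # y's own past only
full = np.column_stack([ones, y[:-1], x[:-1]])  # plus x's past

# Adding x's past sharply reduces the prediction error for y,
# i.e. x Granger-causes y in this simulated system.
print(rss(restricted, target), rss(full, target))
```

In the neural setting, each "signal" is the estimated activity of a functionally interpretable brain region, and the pattern of which regions improve prediction of which others is what reveals the directed functional architecture.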

Potter in Cognitive Brown Bag Weds. Nov. 2 at noon

Time: 12:00pm to 1:15pm Wednesday Nov. 2. Location:  Tobin 521B. All are welcome!

Kevin Potter (University of Massachusetts)

Title:
Testing a perceptual fluency/disfluency model of priming with a model of response time and choice

Abstract:
With immediate repetition priming of forced choice perceptual identification, short prime durations produce positive priming (i.e., higher accuracy when the target is primed, but lower accuracy when the foil is primed). In contrast, long prime durations reverse this pattern. The dynamic time course of this transition from positive to negative priming is well explained by the nROUSE model of Huber and O’Reilly (2003), which includes neural habituation. This model assumes that the speed of perceptual identification is used to decide which choice word was seen most recently as the briefly flashed target. Thus, short duration primes induce faster identification (perceptual fluency) for the primed choice and a bias for the primed alternative whereas long duration primes induce slower identification (perceptual disfluency) for the primed choice and a bias against the primed alternative. This account makes specific predictions regarding perceptual identification latencies, and yet a test of these predictions is difficult with forced choice testing, which reflects a comparison decision process. To address this limitation, we collected forced-choice and single-item same-different responses in the same priming paradigm. We then applied a diffusion-race model to the data, transforming the response time and choice data into ‘observed’ drift rate parameters (i.e., the rate of evidence accumulation). Remarkably, the drift rates were inversely proportional to the identification latencies of the nROUSE model even though each model was applied independently to the data and even though the nROUSE model was only applied to the accuracy data. This convergence of the models confirms key predictions of the nROUSE model regarding perceptual fluency and disfluency.
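
The diffusion-race idea mentioned in the abstract can be sketched as two noisy evidence accumulators, one per response option, each with its own drift rate; the first to reach threshold determines both the choice and the response time. The drift values, threshold, and noise level below are invented for illustration, not fitted parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def race_trial(drift_a, drift_b, threshold=1.0, dt=0.001, noise=1.0):
    """Simulate one race trial; returns (choice, rt_in_seconds)."""
    a = b = 0.0
    t = 0.0
    while True:
        t += dt
        # Euler step of each diffusion: drift plus scaled Gaussian noise.
        a += drift_a * dt + noise * np.sqrt(dt) * rng.normal()
        b += drift_b * dt + noise * np.sqrt(dt) * rng.normal()
        if a >= threshold:
            return "A", t
        if b >= threshold:
            return "B", t

# A higher drift rate for option A (e.g. fluent processing of a primed
# target) yields mostly "A" choices and faster responses.
trials = [race_trial(drift_a=2.0, drift_b=0.5) for _ in range(200)]
p_a = sum(c == "A" for c, _ in trials) / len(trials)
print(p_a)
```

Fitting such a model to observed response times and choices recovers the drift rates, which is what allows the comparison with nROUSE's identification latencies described above.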

Jackendoff in Linguistics 3:30 Fri. Oct. 28

Ray Jackendoff will present a colloquium in the Department of Linguistics this Friday (October 28). Place: N400. Time: 3:30 PM.

Ray Jackendoff (Tufts University) and Jenny Audring (University of Leiden): Morphology in the Mental Lexicon.

We explore a theory of morphology grounded in the outlook of the Parallel Architecture (PA, Jackendoff 2002), drawing in large part on Construction Morphology (Booij 2010). The fundamental goal is to describe what a speaker stores and in what form, and to describe how this knowledge is put to use in constructing novel utterances. A basic tenet of PA is that linguistic structure is built out of independent phonological, syntactic, and semantic/conceptual structures, plus explicit interfaces that relate the three structures, often in many-to-many fashion.

Within this outlook, morphology emerges as the grammar of word-sized pieces of structure and their constituents, comprising morphosyntax and its interfaces to word phonology, lexical semantics, and phrasal syntax. Canonical morphology features a straightforward mapping among these components; irregular morphology is predominantly a matter of noncanonical mapping between constituents of morphosyntax and phonology.

As in Construction Grammar and Construction Morphology, PA encodes rules of grammar as schemas: pieces of linguistic structure that contain variables, but which are otherwise in the same format as words – in other words, the grammar is part of the lexicon. Novel utterances are constructed by instantiating variables in schemas through Unification. A compatible morphological theory must likewise state morphological patterns in terms of declarative schemas rather than procedural or realizational rules.

Non-productive morphological patterns can be described in terms of schemas that are formally parallel to those for productive patterns. However, they do not encode affordances for building new structures online; rather, they motivate relations among items stored in the lexicon. Productive schemas can be used in this way as well, in addition to their standard use in building novel structures; hence they can be thought of as schemas that have “gone viral.” Interestingly, this classification proves useful also for extending syntactic schemas to idioms and other fixed expressions.

This raises the question of how lexical relations are to be expressed. Beginning with the well-known mechanism of inheritance, we show that inheritance should be cashed out, not in terms of minimizing the number of symbols in the lexicon, but in terms of increased redundancy (or lower entropy). We propose a generalization of inheritance to include lexical relations that are nondirectional and symmetrical, and we develop a notation that pinpoints the regions of commonality between pairs of words, between words and schemas, and between pairs of schemas.

We conclude that linguistic theory should be concerned with relations among lexical items, from productive to marginal, at least as much as with the online construction of novel forms. We further conclude that the lexicon is richly textured, in a fashion that invites comparison with other domains of human knowledge.

Boroditsky at Smith College Thurs. Oct. 27 at 4:30

Lera Boroditsky will present “How the languages we speak shape the ways we think” on Thursday, October 27, 4:30 PM, McConnell Foyer 103, Smith College.

Lera Boroditsky is an Associate Professor of Cognitive Science at UCSD and Editor in Chief of Frontiers in Cultural Psychology. She previously served on the faculty at MIT and at Stanford. Her research is on the relationships between mind, world, and language (or how humans get so smart). She has been named one of 25 Visionaries changing the world by the Utne Reader, and is also a Searle Scholar, a McDonnell scholar, recipient of an NSF Career award, and an APA Distinguished Scientist lecturer.

Criss in Cognitive Brown Bag Weds. Oct. 26th at noon

Time: 12:00pm to 1:15pm Wednesday Oct. 26. Location:  Tobin 521B.

Amy Criss (Syracuse University)

http://memolab.syr.edu/

Title: Memory across Tasks, Items, and Individuals

Memory researchers have spent the past several decades drilling down into memory: many empirical investigations and theoretical developments focus on a single task and/or a single effect. We propose that the field is well-positioned to benefit from zooming back out, and here we present one attempt to do so. A total of 462 participants completed five memory tasks with a fixed set of words that varied on many dimensions. We extract factors that are shared across memory tasks and identify properties of items that support memory. Finally, we sketch the beginnings of a conceptual model based on these findings.

Ng in CSSI, Weds. Oct. 19 at 11 am

In lieu of the Friday Computational Social Science Institute seminar this week, there is a collaborative talk with the UMass Information Technology Policy seminar series. The talk will take place in CS140 on Wednesday, October 19, at 11 AM.

Jason Q. Ng, Data Analyst, Tumblr; Research Fellow, Citizen Lab
A data-driven approach to researching censorship and sensitive conversations on social media 

Like all nations, China has been profoundly affected by the emergence of the Internet, particularly new forms of social media that allow individuals to be independent broadcasters of news. However, the rise of “We Media” has also led to a corresponding rise in the filtering and blocking of online content in China. Identifying and explaining these disruptions comes with a host of challenges for researchers, ranging from technical ones, like developing methodologies for tracking online censorship, to non-technical ones, like defining what online censorship even is.

In this talk, we’ll look at a number of ways online censorship can be defined, at various data-driven techniques for revealing its occurrence on social media, and at the ways social media companies attempt to mask or justify it. However, just as important as identifying the mechanisms by which censorship is implemented is understanding the motivations behind it. Knowing both how and why online censorship occurs is key not only for academic researchers who hope to better understand content moderation and filtering practices, but also for the activists, journalists, and advocates who rely on such findings in their work.

About the speaker: Jason Q. Ng is currently a Research Fellow at the University of Toronto’s Citizen Lab, Data Analyst at Tumblr, and author of Blocked on Weibo, a book on Chinese social media. He is also a research consultant at China Digital Times where he develops censorship monitoring tools and teaches a digital activism course at Columbia SIPA. His writing and research projects can be found at www.jasonqng.com.