Author Archives: Joseph Pater

Pavlick in MLFL Weds. Oct. 24 at 11:45

who: Ellie Pavlick (Brown University)

when: 10/24 (Wednesday) 11:45a – 1:15p

where: Computer Science Building Rm 150

food: Athena’s Pizza

Why should we care about linguistics?

Abstract: In just the past few months, a flurry of adversarial studies have pushed back on the apparent progress of neural networks, with multiple analyses suggesting that deep models of text fail to capture even basic properties of language, such as negation, word order, and compositionality. Alongside this wave of negative results, our field has stated ambitions to move beyond task-specific models and toward “general purpose” word, sentence, and even document embeddings. This is a tall order for the field of NLP, and, I argue, marks a significant shift in the way we approach our research. I will discuss what we can learn from the field of linguistics about the challenges of codifying all of language in a “general purpose” way. Then, more importantly, I will discuss what we cannot learn from linguistics. I will argue that the state-of-the-art of NLP research is operating close to the limits of what we know about natural language semantics, both within our field and outside it. I will conclude with thoughts on why this opens opportunities for NLP to advance both technology and basic science as it relates to language, and the implications for the way we should conduct empirical research.

Bio: Ellie Pavlick is an Assistant Professor of Computer Science at Brown University and a Research Scientist at Google AI. Ellie received her PhD from the University of Pennsylvania under the supervision of Chris Callison-Burch. Her current research focus is on semantics, pragmatics, and building cognitively plausible computational models of natural language inference.

Ling in Cognitive Brown Bag Oct. 24 at noon

The Cognitive Brown Bag speaker this Wednesday will be Sam Ling of Boston University (https://www.bu.edu/psych/profile/sam-ling-phd-2/) on "How does normalization regulate visual competition?" (abstract below). As usual, the talk is 12:00-1:15, Tobin 521B.

Abstract. How does the visual system regulate competing sensory information? Recent theories propose that a computation known as divisive normalization plays a key role in governing neural competition. Normalization is considered a canonical neural computation, potentially driving responses throughout the neural and cognitive system. Interestingly, there is evidence to suggest that normalization’s pervasive role relies on an exquisite tuning to stimulus features, such as orientation, but this feature-selective nature of normalization is surprisingly understudied, particularly in humans. In this talk, I will describe a series of studies using functional neuroimaging and psychophysics to shed light on the tuning characteristics that allow normalization to control population responses within human visual cortex, and to understand how this form of normalization can support functions as diverse as attentional selection and working memory.

 

Elhadad in MLFL Thurs. Oct. 18 at 11:45

who: Noémie Elhadad, Columbia University
when: October 18, 11:45 A.M. – 1:00 P.M.
where: Computer Science Building, Room 150/151
food: Athena’s Pizza

Phenotyping Endometriosis Through Mixed Membership Models Of Self-Tracking Data

Abstract: Despite the impressive past and recent advances in medical sciences, there are still a host of chronic conditions which are not well understood and lack even a consensus description of their signs and symptoms. Without such consensus, research toward precise treatments and ultimately a cure is at a halt. Phenotyping these conditions, that is, systematically characterizing their signs, symptoms, and other aspects, is thus particularly needed. Computational phenotyping can help identify cohorts of patients at scale and identify potential sub-groups, thus generating new hypotheses for these mysterious conditions. While traditional phenotyping algorithms rely on clinical documentation and expert knowledge, phenotyping for enigmatic conditions might benefit from patient expertise as well. In this talk I will focus on one such enigmatic condition, endometriosis, a chronic condition estimated to affect 10% of women of reproductive age. I will describe approaches needed to phenotype the condition: eliciting dimensions of disease, engaging patients in self-tracking their condition, and discovering phenotypes and sub-phenotypes of endometriosis based on patients’ accounts of the disease.

Bio: Noemie Elhadad is an Associate Professor in Biomedical Informatics, affiliated with Computer Science and the Data Science Institute at Columbia University. Her research is at the intersection of computation, technology, and medicine with a focus on machine learning for healthcare and natural language processing of clinical and health texts. Her work is funded by the National Science Foundation, the National Library of Medicine, the National Cancer Institute, and the National Institute for General Medical Sciences.

More at http://people.dbmi.columbia.edu/noemie/

Zaki in Cognitive Brown Bag Weds. Oct. 17 at noon

The cognitive brown bag speaker this week will be Safa Zaki, of Williams College (https://sites.williams.edu/szaki/) on "Sequence Effects in Category Learning". The abstract is below. As usual, the talk will be on Wednesday, 12:00-1:15, Tobin 521B.

 

Abstract. Sequence effects have recently been reported in the category learning literature, in which the particular order of presentation of exemplars in a category affects the speed of learning. I will present several experiments in which we test the idea that some of these effects are caused by changes in attention allocation that result from comparisons between temporally juxtaposed exemplars. I will discuss eyetracking data and model fits that provide converging evidence of increased attention to the target dimension as a result of the juxtaposition of items in the list.

 

Discussion: Generative linguistics and neural networks at 60

From Joe Pater

The commentaries on my paper “Generative Linguistics and Neural Networks at 60: Foundation, Friction and Fusion” are all now posted online at the authors’ websites at the links below. The linked version of my paper and, I presume, of the commentaries are the non-copyedited but otherwise final versions that will appear in the March 2019 volume of Language in the Perspectives section.

Update March 2019: The final published versions can now be found at this link.

I decided not to write a reply to the commentaries, since they nicely illustrate a range of possible responses to the target article, and because most of what I would have written in a reply would have been to repeat or elaborate on points that are already in my paper. But there is of course lots more to talk about, so I thought I’d set up this blog post with open comments to allow further relatively well-archived discussion to continue.

Iris Berent and Gary Marcus. No integration without structured representations: reply to Pater.

Ewan Dunbar. Generative grammar, neural networks, and the implementational mapping problem.

Tal Linzen. What can linguistics and deep learning contribute to each other?

Lisa Pearl. Fusion is great, and interpretable fusion could be exciting for theory generation.

Chris Potts. A case for deep learning in semantics.

Jonathan Rawski and Jeff Heinz. No Free Lunch in Linguistics or Machine Learning.

Jacobs in Cognitive Brown Bag Weds. Oct. 10

The cognitive brown bag speaker this week will be Cassandra Jacobs, of UC Davis (https://cljacobs.net/) on “What memory for phrases can tell us about memory and phrases” (abstract below).  As usual, the talk will be on Wednesday, 12:00-1:15, Tobin 521B. All are welcome.

Abstract. Language is full of regularities and formulaic language. We reuse familiar words and phrases and combine them in novel ways to express new ideas. Most research has focused on words, but recent psycholinguistic research suggests that we also represent phrases like “psychic nephew” and “alcoholic beverages” in long-term memory, typically arguing that frequent phrases are easier to process because they are represented and retrieved from memory as unanalyzed wholes, effectively just like words (e.g. Jansen & Barber, 2012; Arnon & Cohen Priva, 2013; Goldberg, 2003). In my research, I have questioned whether this is true by asking how phrases are represented in episodic memory. In two recognition memory and free recall experiments, I will show that phrases are fundamentally composed of words, and are not represented as unanalyzed, word-like units in long-term memory. I propose that simple mechanisms can be used to explain phrase frequency effects without needing to posit the existence of phrases per se. I will then describe a verbal model of memory that can explain phrase frequency effects in recognition and free recall.

 

UMass Week of Memory and Forgetting Oct 29 – Nov 2, 2018

From https://websites.umass.edu/ions/event/umass-week-of-memory-and-forgetting/

This is a week of events on memory and forgetting from a variety of perspectives, showing how memory is both an individual and a societal phenomenon.

The Week of Memory and Forgetting is a collaboration between the Initiative on Neurosciences (IONs), the Fine Arts Center (FAC), the Institute for Holocaust, Genocide, and Memory Studies (IHGMS), and faculty in the Department of Spanish and Portuguese Studies.

UMass Linguistics hiring in Psycholinguistics

Please share this job posting widely! Note that the Department of Linguistics hopes to find someone who:

…can engage with the development of the Cognitive Science Institute, a broad, interdisciplinary group focused on fostering Cognitive Science research across the UMass Community

Also, please note that candidates at both the Assistant and Associate Professor levels are welcome to apply, and that the Department of Linguistics is fully committed to the University’s diversity goals summarized in the last paragraph of the posting.

https://careers.insidehighered.com/job/1614433/associate-professor-of-linguistics/