Author Archives: Joseph Pater

Serre in Cognitive Brown Bag Weds. Feb. 13 at noon

The next Cognitive brown bag speaker will be Thomas Serre of Brown University (http://serre-lab.clps.brown.edu/). The talk is on Wednesday 2/13, 12:00, Tobin 521B; title and abstract are below.

What are the computations underlying primate versus machine vision?

Primates excel at object recognition: for decades, the speed and accuracy of their visual system have remained unmatched by computer algorithms. But recent advances in Deep Convolutional Networks (DCNs) have led to vision systems that are beginning to rival human performance. A growing body of work also suggests that this recent surge in accuracy is accompanied by a concomitant improvement in our ability to account for neural data in higher areas of the primate visual cortex. Overall, DCNs have become the de facto computational models of visual recognition.

In this talk, I will review recent work by our group that brings into relief the limitations of modern DCNs as computational models of primate vision. I will show that DCNs are limited in their ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments, suggesting the need for neural computations beyond those implemented in current architectures. I will further demonstrate how neuroscience principles may help guide the future design of more robust computer vision architectures.
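As a concrete illustration of the kind of task at issue, the sketch below generates a toy "same-different" stimulus: two shapes on a blank canvas, labeled by whether they are identical. This is a hypothetical, SVRT-style construction for illustration only, not the stimuli from the work presented in the talk.

# A minimal, hypothetical sketch of a "same-different" visual reasoning
# stimulus of the sort DCNs are reported to struggle with; invented here
# for illustration, not taken from the talk.
import numpy as np

def make_same_different_trial(size=64, patch=8, same=None, rng=None):
    """Place two random binary shapes on a blank canvas.

    Returns (image, label): label is 1 if the two shapes are identical
    ("same"), 0 otherwise ("different").
    """
    rng = rng or np.random.default_rng()
    same = rng.integers(2) if same is None else int(same)

    shape_a = rng.integers(2, size=(patch, patch))
    shape_b = shape_a.copy() if same else rng.integers(2, size=(patch, patch))

    canvas = np.zeros((size, size), dtype=np.uint8)
    # Put one shape in each half of the canvas so they never overlap.
    ya, xa = rng.integers(size - patch), rng.integers(size // 2 - patch)
    yb, xb = rng.integers(size - patch), rng.integers(size // 2, size - patch)
    canvas[ya:ya + patch, xa:xa + patch] = shape_a
    canvas[yb:yb + patch, xb:xb + patch] = shape_b
    return canvas, same

image, label = make_same_different_trial(same=True)
print(image.shape, "label:", label)

A feedforward classifier must decide whether the two patches match from pixels alone, which is exactly the kind of relational judgment the talk argues requires computations beyond current architectures.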

Perkins in Linguistics Fri. Feb. 15 at 3:30

Laurel Perkins of the University of Maryland (http://ling.umd.edu/~perkinsl/) will present “How to Grow a Grammar: Syntactic Development in 1-Year-Olds” on Friday Feb. 15th at 3:30 PM in N400. All are welcome – an abstract follows.

ABSTRACT: What we can learn depends on what we already know; a child who can’t count cannot learn arithmetic, and a child who can’t segment words cannot identify properties of verbs in her language. Language acquisition, like learning in general, is incremental. How do children draw the right generalizations about their language using incomplete and noisy representations of their linguistic input?

In this talk, I’ll examine some of the first steps of syntax acquisition in 1-year-old infants, using behavioral methods to probe their linguistic representations, and computational methods to ask how they learn from those representations. Taking argument structure as my case study, I will show: (1) that infants represent core clause arguments like “subject” and “object” when learning verbs, (2) that infants can cope with “non-basic” clause types, where those arguments have been displaced, by ignoring some of their input, and (3) that it is possible for infants to learn what kind of data to ignore, even before they can parse it. I will argue that the approach I take for studying this particular learning problem will generalize widely, allowing us to build new models for understanding the role of development in grammar learning.

Suhr in Machine Learning and Friends Thurs. Feb. 7 at 11:45

who: Alane Suhr
when: Feb 7 11:45am
where: Computer Science Building Rm 150
food: Athena’s Pizza

“Modeling and Learning Agents that Understand Language in Context”

Abstract: The meaning of a natural language utterance is influenced by the context in which it occurs, including the interaction history and the situated context. I will discuss two recent projects in context-dependent natural language understanding: building natural language interfaces to databases, and following sequences of instructions. In the first part, I will introduce a model for mapping natural language to executable SQL queries within an interaction. To resolve the meaning of later utterances, the system must consider the interaction history, including previous user utterances and previously generated queries. We show that using both implicit and explicit mechanisms to incorporate interaction history allows the system to effectively generate context-dependent representations. In the second part, I will describe an approach that maps sequences of natural language instructions to system actions that modify an environment, focusing on learning without direct supervision on action sequences. We introduce an exploration-based learning approach that effectively learns to compose system actions to carry out user instructions in the context of the environment and the interaction.
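To make the role of interaction history concrete, here is a schematic Python toy (not Suhr's model): a follow-up question such as "which of those leave before 9am?" is resolved by explicitly reusing the previously generated query as a subquery. The Interaction class, the templates, and the flight-database example are all invented for illustration.

# A schematic, hypothetical illustration of explicit query reuse in a
# context-dependent NL-to-SQL interaction; not the model from the talk.
from dataclasses import dataclass, field

@dataclass
class Interaction:
    history: list = field(default_factory=list)  # (utterance, sql) pairs

    def ask(self, utterance, sql_template):
        """Fill the template's {prev} slot with the last query, if any."""
        prev = self.history[-1][1] if self.history else None
        sql = sql_template.format(prev=prev)
        self.history.append((utterance, sql))
        return sql

session = Interaction()
q1 = session.ask("Which flights go to Boston?",
                 "SELECT * FROM flights WHERE dest = 'BOS'")
# The follow-up only makes sense relative to q1; an explicit copy
# mechanism reuses q1 as a subquery rather than regenerating it.
q2 = session.ask("Which of those leave before 9am?",
                 "SELECT * FROM ({prev}) AS prev WHERE dep_time < '09:00'")
print(q2)

In the actual model the reuse is learned rather than templated, but the point is the same: without access to q1, the second utterance is uninterpretable.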


Bio: Alane Suhr is a PhD student in the Computer Science department at Cornell University, focusing on building agents that understand natural language grounded in complex interactions. She is the recipient of an AI2 Key Scientific Challenges Award and a Microsoft Research Women’s Fellowship, and is a National Science Foundation Graduate Research Fellow. She has received paper awards at ACL 2017 and NAACL 2018. Alane received a Bachelor’s degree in Computer Science and Engineering from Ohio State University in 2016.

Momma in Linguistics Fri. Feb. 8 at 3:30

Shota Momma of UC San Diego (https://shotam.github.io) will present “Unifying parsing and generation” at 3:30, Friday February 8th in ILC N400. All are welcome!
Abstract: We use our grammatical knowledge in at least two ways. On the one hand, we use it to say what we want to convey to others. On the other hand, we use it to understand what others say. In either case, we need to assemble sentence structures in a systematic fashion, in accordance with the grammar of our language. In this talk, I will advance the view that the same syntactic structure building mechanism is shared between comprehension and production, focusing specifically on sentences involving long-distance dependencies. I will argue that both comprehenders and speakers anticipatorily build (i.e., predict and plan, respectively) the gap structure soon after they represent the filler, and before representing the words and structures that intervene between the filler and the gap. I will discuss the basic properties of the algorithm for establishing long-distance dependencies that I hypothesize to be shared between comprehension and production, and suggest that it resembles the derivational steps for establishing long-distance dependencies in an independently motivated grammatical formalism known as Tree Adjoining Grammar.

Hopper in Cognitive bag lunch Weds. Feb. 6 at noon

Will Hopper (https://people.umass.edu/whopper/) will present “Comparing discrete and continuous evidence models of recognition memory response times” on 2/6 at 12:00 in Tobin 521B (abstract below). All are welcome.

Memory theorists have long debated whether recognition decisions are mediated by the strength of a continuous memory signal or by entry into discrete evidence states. Historically, only models using a continuous memory strength signal were able to account for both the response time distributions and the choice probabilities of recognition decisions. Recently, discrete-state models have been extended to account for response time distributions by assuming that observed response times arise as a mixture of latent response time distributions, one associated with each discrete evidence state (Heck & Erdfelder, 2016; Starns, 2018). Here, we compare models from each class (the discrete-race model and the Ratcliff diffusion model), testing their ability to account for both speeded and unspeeded recognition decisions for items tested multiple times within a session. We conclude that the Ratcliff diffusion model provides a better account of the data, as the discrete-race model overestimates memory strength on unspeeded tests in order to describe the response times on speeded tests.
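For readers unfamiliar with the two model classes, a minimal simulation sketch follows. It contrasts a continuous-evidence (diffusion) account, in which noisy evidence accumulates to a boundary, with a discrete-state account, in which response times are a mixture of latent distributions tied to evidence states. All parameter values and the gamma mixture components are arbitrary illustrations, not the models or fits from the talk.

# A minimal simulation sketch contrasting the two model classes; parameter
# values are arbitrary illustrations, not fits to the data discussed above.
import numpy as np

rng = np.random.default_rng(0)

def diffusion_rt(drift=1.0, boundary=1.0, ndt=0.3, dt=0.001, n=1000):
    """Continuous-evidence account: accumulate noisy evidence until a
    boundary is hit (a basic Wiener diffusion, simulated by Euler steps)."""
    rts, choices = [], []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + rng.normal(0.0, np.sqrt(dt))
            t += dt
        rts.append(t + ndt)          # add non-decision time
        choices.append(x > 0)        # which boundary was hit
    return np.array(rts), np.array(choices)

def discrete_state_rt(p_detect=0.7, n=1000):
    """Discrete-state account: RTs are a mixture of latent distributions,
    one per evidence state (here, 'detect' vs. 'guess')."""
    detect = rng.random(n) < p_detect
    rts = np.where(detect,
                   rng.gamma(shape=4.0, scale=0.15, size=n),   # detect state
                   rng.gamma(shape=2.0, scale=0.40, size=n))   # guess state
    return rts, detect

d_rts, _ = diffusion_rt()
m_rts, _ = discrete_state_rt()
print(f"diffusion mean RT: {d_rts.mean():.3f}s, mixture mean RT: {m_rts.mean():.3f}s")

The model comparison in the talk asks which of these generative stories better accounts for observed speeded and unspeeded response time distributions simultaneously.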

Breen in Cognitive Brown Bag Weds. Jan. 23 at noon

The first cognitive brown bag of the semester will be this Wednesday (1/23).  Our speaker will be Mara Breen of Mt. Holyoke College (https://www.mtholyoke.edu/~mbreen/); title and abstract are below.  As usual, talks are in Tobin 521B, 12:00-1:15.  All are welcome.

The remaining schedule for the semester is also provided below.

1/23  Mara Breen (Mt Holyoke)

Hierarchical linguistic metric structure in speaking, listening, and reading

In this talk, I will describe results from three experiments exploring how hierarchical timing regularities in language are realized by speakers, listeners, and readers. First, using a corpus of productions of Dr. Seuss’s The Cat in the Hat, a metrically and phonologically regular children’s book, we show that speakers’ word durations and intensities are accurately predicted by models of linguistic and musical meter, respectively, demonstrating that listeners to these texts receive consistent acoustic cues to hierarchical metric structure. In a second experiment, we recorded event-related potentials (ERPs) as participants listened to an isochronous, non-intensity-varying text-to-speech rendition of The Cat in the Hat. Pilot ERP results reveal electrophysiological indices of metric processing, demonstrating top-down realization of metric structure even in the absence of explicit prosodic cues. In a third experiment, we recorded ERPs while participants silently read metrically regular rhyming couplets whose final word sometimes mismatched the metric or prosodic context. These mismatches elicited ERP patterns similar to neurocognitive responses observed in listening experiments. In sum, these results demonstrate similarities in perceived and simulated hierarchical timing processes in listening and reading, and help explain how listeners use predictable metric structure to facilitate speech segmentation and comprehension.

1/30 Andrea Cataldo

2/6  Will Hopper

2/13  Thomas Serre (Brown)

2/20 Ben Zobel

2/27  TBA

3/6  Jon Burnsky

3/13 SPRING BREAK

3/20  Mohit Iyyer (UMass CS)

3/27 Patrick Sadil

4/3  Junha Chang

4/10 Sandarsh Pandey

4/17 MONDAY SCHEDULE

4/24 Merika Wilson

5/1  First year projects

Roy in MLFL Thurs. 11/8 at 11:45

From Rajarshi Das on the UMass NLP list: “Subhro Roy is visiting us this week. He has done cool work in solving algebra problems via semantic parsing. He is currently working on grounded language stuff and common sense. Please sign up to meet with him here.”

who: Subhro Roy (MIT)

when: 11/08 (Thursday) 11:45a – 1:15p

where: Computer Science Building Rm 150

food: Athena’s Pizza

“Towards Natural Human Robot Communication”

Abstract: Robots are becoming more and more popular with the rise of self-driving cars, autonomous drones, and warehouse automation. However, they still require experts to set up the goals for the task, and are usually devoid of a high-level understanding of their environment. Language can address these issues. Non-expert users can seamlessly instruct robots using natural language commands. Linguistic resources can be used to extract knowledge about the world, which can be distilled into actionable intelligence. In this talk, I will describe some of our recent work in this direction. The first project focuses on robust referring expression grounding, allowing users to describe commands involving objects in the environment. The second focuses on grounding high-level instructions using background knowledge from WikiHow, ConceptNet, and WordNet. I will conclude by describing some of our ongoing work on acquiring commonsense knowledge for household robots.

Bio:

Subhro is a Postdoctoral Associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with Prof. Nicholas Roy. His research focuses on grounding natural language instructions and on commonsense knowledge acquisition, aimed at building capable service robots that interact seamlessly with humans. His research contributes to programs funded by the US Army Research Laboratory and the Toyota Research Institute.
Subhro obtained his Ph.D. at the University of Illinois at Urbana-Champaign, advised by Prof. Dan Roth. His doctoral research focused on models for automated numeric reasoning and word problem solving, and led to the development of several top-performing word problem solvers and the MAWPS system for standardizing datasets and evaluation in the area. His work has been published in TACL, EMNLP, NAACL, AAAI, CoRL, and ISER. Subhro obtained his B. Tech. degree at the Indian Institute of Technology (IIT) Kharagpur.

Special talk by Brian Scholl Mon. 11/5 at noon

Brian Scholl (Yale; http://perception.yale.edu/Brian/) will present “Let’s See What Happens: Dynamic Events as Foundational Units of Perception and Cognition” next Monday (Nov 5) at noon in the CHC Event Hall East. A flyer is attached and the abstract is below.
Abstract. What is the purpose of perception?  Perhaps the most common answer to this question is that perception is a way of figuring out *what’s out there*, so as to better support adaptive interaction with our local environment.  Accordingly, the vast majority of work on visual processing involves representations such as features, objects, and scenes.  But the world consists of more than such static entities: out there, things *happen*.  And so I will suggest here that the underlying units of perception are often dynamic visual events.  In particular, in a series of studies that were largely inspired by developmental work, I will explore how visual event representations provide a foundation for much of our mental lives — including attention and memory, causal understanding, intuitive physics, and even social cognition.  This presentation will involve some results and some statistics, but the key claims will also be illustrated with phenomenologically vivid demonstrations in which you’ll be able to directly experience the importance of event perception — via phenomena such as transformational apparent motion, rhythmic engagement, change blindness in dynamic scenes, and the perception of chasing.  Collectively, this work presents a new way to think about how perception is attuned to an inherently dynamic world.
This event is co-sponsored by PBS, the Developmental Science Initiative, and the Initiative in Cognitive Science.

Brian Scholl lecture flyer Fall 2018.pdf

Lacreuse in cognitive bag lunch Weds. 10/31 at noon

The next brown bag speaker, on 10/31 at 12:00 in Tobin 521B, is UMass’ own Agnès Lacreuse (http://alacreuse.wixsite.com/lacreuselab).

Sex, hormones and cognitive aging in primates

Emerging clinical data suggest that men experience greater age-related decline than women, but little is known about the factors that drive these sex differences. Nonhuman primate models of human aging can help us answer some of these questions. I will describe several studies in nonhuman primates focusing on the effects of biological sex and sex hormones on neurocognitive aging. These studies are essential for the design of optimal therapies to alleviate age-related cognitive decline in humans. I will also argue that aging research across primate species has the potential to provide new clues for understanding healthy and pathological aging in humans.