Author Archives: Joseph Pater

Munkhdalai in Machine Learning and Friends Tues. Feb. 14 at noon

who: Tsendsuren Munkhdalai, UMass CICS
when: noon, Tuesday, February 14
where: Computer Science Building rm150
food: Antonio’s pizza

Abstract:
This talk will first briefly review recent advances in memory-augmented neural nets and then present our own contribution, Neural Semantic Encoders (NSE) [1,2]. With a special focus on NSE, we show that external memory in conjunction with an attention mechanism can be a good asset in natural language understanding and reasoning. In particular, we will cover a set of real and large-scale NLP tasks ranging from sentence classification to sequence-to-sequence learning and question answering, and demonstrate how NSE is effectively applied to them.
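The core operation behind such models, a soft attention read over an external memory, can be sketched in a few lines of NumPy. This is a generic illustration, not NSE's actual architecture; the memory contents, dimensions, and dot-product scoring here are all toy assumptions:

```python
import numpy as np

def attention_read(memory, query):
    """Soft attention over an external memory: score each memory
    slot against the query, softmax the scores, and return the
    weighted sum of slots as the read vector."""
    scores = memory @ query                  # one score per slot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over slots
    return weights @ memory                  # weighted sum of slots

# Toy example: 4 memory slots, each a 3-dimensional vector.
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 3))
query = rng.normal(size=3)
read = attention_read(memory, query)
print(read.shape)  # (3,)
```

Because the read is a differentiable weighted sum rather than a hard lookup, the whole model can be trained end-to-end with gradient descent.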

Bio:
Tsendsuren Munkhdalai is a postdoctoral associate in Prof. Hong’s BioNLP group at UMass Medical School. He recently received his PhD in biomedical information extraction and NLP from the Department of Computer Science at Chungbuk National University, South Korea, under the supervision of Prof. Keun Ho Ryu. His research interests include semi-supervised learning, representation learning, meta-learning, and deep learning with applications to natural language understanding and (clinical/biomedical) information extraction.

Fitter in Machine Learning and Friends, noon, Wednesday, February 15

Please note: This is the second MLFL scheduled this week, and it is on Wednesday.

who: Naomi Fitter, University of Pennsylvania 
when: noon, Wednesday, February 15
where: Computer Science Building rm150
food: Antonio’s pizza

Exploring Human-Inspired Haptic Interaction Skills For Robots
Abstract: A human-inspired understanding of the world can enhance robots’ abilities to successfully and safely explore the world around them, in applications ranging from manipulating delicate objects to playfully high-fiving a human teammate. Particularly in situations where human skills outweigh modern robot capabilities, data collected from people can yield models for successful robot behaviors. In this talk, I will cover my past cognitive robotics work on helping the PR2 robot to explore and label objects with haptic adjectives and my more recent work on allowing the Baxter robot to label and reciprocate human motions.
Bio:  Naomi Fitter is a PhD Candidate and member of the Haptics Group in the University of Pennsylvania GRASP Lab, working with Professor Katherine Kuchenbecker. She investigates socially relevant physical human-robot interactions like human-robot high fives and hand-clapping games. Her work involves a combination of haptics, socially assistive robotics, entertaining robotics, and physical human-robot interaction.

John Rickford to deliver Freeman lecture Friday Feb. 17 at 2:30

“Justice for Jeantel (and Trayvon): Fighting Dialect Prejudice in Courtrooms and Beyond.”

Professor John Rickford of Stanford University will deliver this year’s annual Freeman Lecture in Linguistics in ILC N151 on Friday, Feb. 17, 2017, at 2:30 p.m.

Professor Rickford is a world-renowned expert in the structure, history, and dialectology of African American English. He is the author of numerous books on AAE, including ‘Spoken Soul: The Story of Black English’, ‘African American Vernacular English’, and his most recent book, ‘African American, Creole, and Other English Vernaculars in Education’.

Phillips in Linguistics Fri. Feb. 10 at 3:30

Colin Phillips of the University of Maryland will be presenting “Speaking, understanding, and grammar” in the Linguistics colloquium series Friday Feb. 10th at 3:30, in ILC N400. All are welcome!

Abstract. We speak and understand the same language, but it’s generally assumed that language production and comprehension are subserved by separate cognitive systems. So they must presumably draw on a third, task-neutral cognitive system (“grammar”). For this reason, comprehension-production differences are a thorn in the side of anybody who might want to collapse grammar and language processing mechanisms (i.e., me!). In this talk I will explore two linguistic domains from the perspective of comprehension and production. In the case of syntactic categories, I will show that the same underlying mechanisms can have rather different surface effects in comprehension and production. In the case of argument role information, I will show an apparent conflict between comprehension and production. In production, argument role information tightly governs the time course of speech planning. But in comprehension, initial prediction mechanisms seem to be blind to argument role information. I argue that both the similarities and contrasts can be captured under a view in which the same cognitive architecture is accessed based on different information, i.e., sounds for comprehension, messages for production. I will discuss the relation between this and other ways of thinking about comprehension-production relations.

Cheries in Cognitive Brown Bag Weds. Feb. 8 at noon

Erik Cheries (UMass) will present in the Cognitive Bag Lunch Wednesday, Feb. 8 at 12pm in Tobin 521B. All are welcome! Title and abstract follow.

Title: The Ins(ides) & Out(sides) of Infants’ Representations of Others

Abstract: Two cognitive biases exert an especially powerful influence on adults’ social reasoning. On the one hand, adults automatically judge the personalities and dispositions of other people based upon their outward appearance, especially from facial characteristics. On the other hand, adults’ causal explanations about others’ behavior are fundamentally biased towards internal properties and features that lie beneath the surface. What are the developmental origins of these powerful social reasoning biases? My talk will examine whether infants in the first year of life possess rudimentary forms of both types of heuristics.

Speech recognition talk in Machine Learning and Friends Thurs. Feb. 9 at noon

Sequence Prediction With Neural Segmental Models

Hao Tang of the Toyota Technological Institute at Chicago will speak in CS 150 at noon Thurs. Feb. 9 (arrive at 11:45 to get pizza).

Abstract:

Segments that span contiguous parts of inputs, such as phonemes in speech, named entities in sentences, and actions in videos, occur frequently in sequence prediction problems. Recent work has shown that segmental models, a class of models that explicitly hypothesizes segments, can significantly improve accuracy. However, segmental models suffer from slow decoding, hampering the use of computationally expensive features. In addition, training segmental models requires detailed manual annotation, which makes collecting datasets expensive.
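A rough illustration of why explicit segment hypotheses slow decoding: a segmental Viterbi search must consider every candidate segment ending at each position, not just the single previous frame. The scorer below is a toy stand-in for the expensive neural segment features the abstract mentions:

```python
def best_segmentation(n_frames, score, max_len):
    """Viterbi-style dynamic program over segmentations.
    best[t] holds the score of the best segmentation of frames
    [0, t); each step loops over all segment lengths ending at t,
    which multiplies decoding cost by max_len compared to a
    frame-level model."""
    NEG_INF = float("-inf")
    best = [0.0] + [NEG_INF] * n_frames
    back = [0] * (n_frames + 1)
    for t in range(1, n_frames + 1):
        for length in range(1, min(max_len, t) + 1):
            s = best[t - length] + score(t - length, t)
            if s > best[t]:
                best[t], back[t] = s, t - length
    # Recover the segment boundaries by backtracking.
    segs, t = [], n_frames
    while t > 0:
        segs.append((back[t], t))
        t = back[t]
    return best[n_frames], segs[::-1]

# Toy scorer that rewards segments of length 2.
total, segs = best_segmentation(
    6, lambda i, j: 1.0 if j - i == 2 else 0.0, max_len=3)
print(segs)  # [(0, 2), (2, 4), (4, 6)]
```

The same structure underlies phoneme segmentation in speech: `score(i, j)` would be a learned function of the acoustic frames in span [i, j).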

In the first part of the talk, I will introduce discriminative segmental cascades, a multi-pass framework that allows us to improve accuracy by adding higher-order features and neural segmental features while maintaining efficiency. I will also show how the cascades can be used to speed up inference and training. In the second part of the talk, I will discuss end-to-end training for segmental models with various loss functions. I will address the difficulty of end-to-end training from random initialization by comparing it to two-stage training. Finally, I will show how end-to-end training can eliminate the need for detailed manual annotation.

Bio:

Hao Tang is a Ph.D. candidate at the Toyota Technological Institute at Chicago. His main interests are in machine learning and its application to speech recognition, with particular interests in discriminative training and segmental models. His work on segmental models was nominated for the Best Paper Award at ASRU 2015, and an application of such models to fingerspelling recognition earned a Best Student Paper Award at ICASSP 2016. He received a B.S. degree in Computer Science and an M.S. degree in Electrical Engineering from National Taiwan University in 2007 and 2010, respectively.

Wilson on Recognition Memory in Cognitive bag lunch Weds. Feb. 1 at noon

Merika Wilson (UMass) will present in the Cognitive Bag Lunch Wednesday, Feb. 1 at 12pm in Tobin 521B. All are welcome! Title and abstract follow.

Recognition Memory Shielded from Perceptual but not Semantic Interference in Natural Aging

In the Deese–Roediger–McDermott (DRM) paradigm, false memory for unstudied lures depends upon interference created by semantic associations between lures and studied items. It has been hypothesized that older adults have more false memories than young adults due to age-related structural changes in the medial temporal lobe (MTL). There is conflicting evidence as to whether this memory impairment in older adults is also present when the words on the list are perceptually related. In a modified DRM paradigm, we presented multiple interleaved lists of semantically related or phonetically/orthographically related words. Using signal detection theory to interpret our data, we found that older adults’ recognition memory performance was impaired less by perceptual interference than by semantic interference. Additionally, older adults were impaired less by perceptual interference than young adults, and impaired more by semantic interference than young adults. We suggest that inconsistencies in the previous literature on false memory in older adults may have stemmed from using false alarm rates as the dependent variable of interest, rather than using d-prime (which provides a measure of accuracy uncontaminated by response bias). Moreover, we interpret the present results in terms of age-related advantages, namely that older adults have more precise perceptual representations and richer associative semantic networks.
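The d-prime measure referred to above is standard signal detection theory: sensitivity is the difference between the z-transformed hit rate and false-alarm rate, which separates accuracy from response bias. A minimal sketch using only the Python standard library (the rates below are made-up numbers, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(H) - z(FA).
    Unlike a raw false-alarm rate, d' is uncontaminated by a
    participant's bias toward answering 'old' or 'new'."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Two hypothetical observers with the same false-alarm rate
# but different hit rates:
print(round(d_prime(0.85, 0.20), 2))  # 1.88
print(round(d_prime(0.60, 0.20), 2))  # 1.09
```

Comparing only false-alarm rates would call these two observers equally error-prone; d-prime shows the first discriminates old from new items far better.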

UMass CogSci Workshop this Friday at 2:30!

The 3rd annual UMass Cognitive Science Workshop will take place from 2:30 – 5 on Friday Feb. 3rd, 2017 in ILC N400. Talks by Rosie Cowell, Meghan Armstrong-Abrami, and Florence Sullivan will be followed by a poster session and reception. This is a chance not only to hear about exciting new research, but also to meet fellow cognitive scientists from across the campus, and discuss the further growth of CogSci at UMass.

Talks

2:30 Rosie Cowell, Cognitive Division, Psychological and Brain Sciences

3:00 Meghan Armstrong-Abrami, Hispanic Linguistics, Languages, Literatures and Cultures

3:30 Florence Sullivan, Education

Posters

“Mothers’ use of F0 after the first year of life in American English and Peninsular Spanish,” Alba Arias, Eduardo García, Isaac McAlister, Covadonga Sánchez & Meghan Armstrong (Spanish and Portuguese, LLC)
“The acquisition of recursive locative prepositional phrases and relative clauses in child English,” Roeper, T., Sevcenco, A., & Pearson, B. Z. (Linguistics)
“Syntactic and prosodic marking of focus in American English and Peninsular Spanish,” Covadonga Sánchez-Alvarado (Spanish and Portuguese, LLC)
“Learning to Identify Speakers from Kinematic Information,” Alexandra Jesse and Michael Bartoli (Psychological and Brain Sciences)
“Attention Modulates Cross-Modal Retuning of Phonetic Categories to Speakers,” David Kajander, Elina Kaplan and Alexandra Jesse (Psychological and Brain Sciences)
“Localized representations contain distributed information: insight from simulations of deep convolutional neural networks,” Nicholas Blauch, Elissa Aminoff and Michael Tarr (BDIC (NB); Psychology, Fordham University (EA); Psychology, Carnegie Mellon University (MT); Center for the Neural Basis of Cognition (NB, EA, MT))
“Linguistic Pressure and Dialect Change: Dominicans in Madrid,” Fiona Dixon (Spanish and Portuguese, LLC)
“A visualization technique for Bayesian reasoning,” Cara Bosco, Andrew Cohen, Jake Nadler, Jeff Starns (Psychological and Brain Sciences)
“Ambiguity Resolution in Relative Clauses: Prosodic vs. Contextual Information in L2 Spanish,” Eduardo García-Fernández (Spanish and Portuguese, LLC)

Thomas on Safe Machine Learning in Data Science Tues. Jan. 31 at 4 p.m.

What: DS Seminar
Date: January 31, 2017
Time: 4:00 – 5:00 P.M.
Location: Computer Science Building, Room 151
A reception will be held at 3:40 P.M. in the atrium outside the presentation room.

Philip Thomas
Carnegie Mellon University
Safe Machine Learning

Abstract:
Machine learning algorithms are everywhere, ranging from simple data analysis and pattern recognition tools used across the sciences to complex systems that achieve super-human performance on various tasks. Ensuring that they are safe—that they do not, for example, cause harm to humans or act in a racist or sexist way—is therefore not a hypothetical problem to be dealt with in the future, but a pressing one that we can and should address now.

Philip will discuss some of his recent efforts to develop safe machine learning algorithms, and particularly safe reinforcement learning algorithms, which can be responsibly applied to high-risk applications. He will focus on a specific research problem that is central to the design of safe reinforcement learning algorithms: accurately predicting how well a policy would perform if it were to be used, given data collected from the deployment of a different policy. Solutions to this problem provide a way to determine that a newly proposed policy would be dangerous to use without requiring the dangerous policy to ever actually be used.
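The prediction problem described above is usually called off-policy evaluation. One classical (though high-variance) estimator is per-trajectory importance sampling: weight each observed return by how much more likely its actions are under the proposed policy than under the behavior policy. This sketch is a generic textbook illustration, not Thomas's own method, and the policies and data are toy assumptions:

```python
def importance_sampling_estimate(trajectories, pi_e, pi_b):
    """Estimate the expected return of evaluation policy pi_e
    from trajectories collected under behavior policy pi_b.
    Each trajectory is (steps, return), where steps is a list of
    (state, action) pairs; its return is reweighted by the product
    of pi_e(a|s) / pi_b(a|s) over its steps."""
    total = 0.0
    for steps, ret in trajectories:
        w = 1.0
        for state, action in steps:
            w *= pi_e(state, action) / pi_b(state, action)
        total += w * ret
    return total / len(trajectories)

# Sanity check: if pi_e == pi_b, every weight is 1 and the
# estimate is just the mean observed return.
pi = lambda state, action: 0.5
data = [([("s0", 0)], 1.0), ([("s0", 1)], 3.0)]
print(importance_sampling_estimate(data, pi, pi))  # 2.0
```

Crucially, no action is ever taken under the new policy; the estimate is computed entirely from the behavior policy's logged data, which is what makes such estimators useful for safety checks before deployment.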

Bio:
Philip Thomas is a postdoctoral research fellow in the Computer Science Department at Carnegie Mellon University, advised by Emma Brunskill. He received his Ph.D. from the College of Information and Computer Sciences at the University of Massachusetts Amherst in 2015, where he was advised by Andrew Barto. Prior to that, Philip received his B.S. and M.S. in computer science from Case Western Reserve University in 2008 and 2009, respectively, where Michael Branicky was his adviser. Philip’s research interests are in machine learning with emphases on reinforcement learning, safety, and designing algorithms that have practical theoretical guarantees.

Special talk by Psyche Loui Friday Feb. 3 at 1:15

The Department of Music and the Cognitive Science Initiative are jointly sponsoring a visit by Psyche Loui of Wesleyan University, who will speak on “Emotion and Creativity in the Musical Brain” in ILC N400 from 1:15 to 2:15 on Friday Feb. 3. The abstract is below. This will be followed by the CogSci Workshop from 2:30 – 5 (poster submission deadline this Friday Jan. 27th).

Abstract. How does music, as patterns of intentional sounds, come to embody human creativity, to express emotions, and to encourage social bonding? In this talk I argue that statistical properties of the sound environment interact with biological constraints of the human brain, specifically its structural and functional connectivity, to give rise to multiple aspects of musical experiences. I will describe recent studies in my lab in which we identify the brain networks that enable strong emotional responses to music, and observe the effects of training in musical improvisation on brain and cognitive structure and function.