Monthly Archives: February 2017

John Rickford to deliver Freeman lecture Friday Feb. 17 at 2:30

“Justice for Jeantel (and Trayvon): Fighting Dialect Prejudice in Courtrooms and Beyond.”

Professor John Rickford of Stanford University will deliver this year's annual Freeman Lecture in Linguistics in ILC N151 on Friday, Feb. 17, 2017, at 2:30 pm.

Professor Rickford is a world-renowned expert in the structure, history, and dialectology of African American English. He is the author of numerous books on AAE, including ‘Spoken Soul: The Story of Black English’, ‘African American Vernacular English’, and his most recent book, ‘African American, Creole, and Other English Vernaculars in Education’.

Discussion of Bender (2016) in next CLC meeting, Feb 15th, 1pm

The next Computational Linguistics Community (CLC) meeting will take place at the NLP reading group on Wednesday, February 15, 1-2pm, in CS 303. We will be discussing a recent paper by Emily Bender on the role of linguistic typology in NLP:

Bender, Emily M. 2016. Linguistic Typology in Natural Language Processing. Linguistic Typology 20(3):645-660.

The following link should take you to the UMass library’s full text of this paper:

UMass Library Full Text


Phillips in Linguistics Fri. Feb. 10 at 3:30

Colin Phillips of the University of Maryland will be presenting “Speaking, understanding, and grammar” in the Linguistics colloquium series Friday Feb. 10th at 3:30, in ILC N400. All are welcome!

Abstract. We speak and understand the same language, but it’s generally assumed that language production and comprehension are subserved by separate cognitive systems. So they must presumably draw on a third, task-neutral cognitive system (“grammar”). For this reason, comprehension-production differences are a thorn in the side of anybody who might want to collapse grammar and language processing mechanisms (i.e., me!). In this talk I will explore two linguistic domains from the perspective of comprehension and production. In the case of syntactic categories, I will show that the same underlying mechanisms can have rather different surface effects in comprehension and production. In the case of argument role information, I will show an apparent conflict between comprehension and production. In production, argument role information tightly governs the time course of speech planning. But in comprehension, initial prediction mechanisms seem to be blind to argument role information. I argue that both the similarities and contrasts can be captured under a view in which the same cognitive architecture is accessed based on different information, i.e., sounds for comprehension, messages for production. I will discuss the relation between this and other ways of thinking about comprehension-production relations.

Cheries in Cognitive Brown Bag Weds. Feb. 8 at noon

Erik Cheries (UMass) will present in the Cognitive Bag Lunch Wednesday, Feb. 8 at 12pm in Tobin 521B. All are welcome! Title and abstract follow.

Title: The Ins(ides) & Out(sides) of Infants’ Representations of Others

Abstract: Two cognitive biases exert an especially powerful influence on adults’ social reasoning. On the one hand, adults automatically judge the personalities and dispositions of other people based upon their outward appearance, especially from facial characteristics. On the other hand, adults’ causal explanations of others’ behavior are fundamentally biased toward internal properties and features that lie beneath the surface. What are the developmental origins of these powerful social reasoning biases? My talk will examine whether infants in the first year of life possess rudimentary forms of both types of heuristics.

Speech recognition talk in Machine Learning and Friends Thurs. Feb. 9 at noon

Sequence Prediction With Neural Segmental Models

Hao Tang of the Toyota Technological Institute at Chicago will speak in CS 150 at noon Thurs. Feb. 9 (arrive at 11:45 to get pizza).

Abstract:

Segments spanning contiguous parts of an input, such as phonemes in speech, named entities in sentences, and actions in videos, occur frequently in sequence prediction problems. Recent work has shown that segmental models, a class of models that explicitly hypothesizes segments, can significantly improve accuracy. However, segmental models suffer from slow decoding, hampering the use of computationally expensive features. In addition, training segmental models requires detailed manual annotation, which makes collecting datasets expensive.
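To make "explicitly hypothesizes segments" concrete: a rough sketch (not the speaker's actual system) of how a generic segmental model decodes. Each candidate segment (start, end, label) receives a score from some scoring function, and Viterbi-style dynamic programming finds the best-scoring segmentation of the input. The `toy_score` function and labels below are invented for illustration.

```python
def decode(n, score, labels, max_len):
    """Best segmentation of positions 0..n by dynamic programming.

    score(i, j, y) -> float: score of labeling span [i, j) with label y.
    Returns (total_score, [(i, j, y), ...]).
    """
    NEG_INF = float("-inf")
    best = [NEG_INF] * (n + 1)  # best[j]: best score over segmentations of [0, j)
    back = [None] * (n + 1)     # backpointer: (i, y) that achieved best[j]
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            if best[i] == NEG_INF:
                continue
            for y in labels:
                s = best[i] + score(i, j, y)
                if s > best[j]:
                    best[j] = s
                    back[j] = (i, y)
    # Recover the segment sequence by following backpointers from position n.
    segs, j = [], n
    while j > 0:
        i, y = back[j]
        segs.append((i, j, y))
        j = i
    segs.reverse()
    return best[n], segs

# Hypothetical hand-crafted scores: longer segments and label "A" score
# higher, with a fixed per-segment penalty discouraging over-segmentation.
def toy_score(i, j, y):
    return (j - i) * 1.0 + (0.5 if y == "A" else 0.0) - 1.0

total, segs = decode(4, toy_score, ["A", "B"], max_len=4)
# The per-segment penalty makes one long "A" segment win: [(0, 4, "A")]
```

Because the inner loop ranges over all spans up to `max_len`, decoding costs roughly `max_len` times that of a frame-level model per position, which is the slow-decoding issue the abstract mentions; expensive segment features multiply that cost further.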

In the first part of the talk, I will introduce discriminative segmental cascades, a multi-pass framework that allows us to improve accuracy by adding higher-order features and neural segmental features while maintaining efficiency. I will also show how the cascades can be used to speed up inference and training. In the second part of the talk, I will discuss end-to-end training for segmental models with various loss functions. I will address the difficulty of end-to-end training from random initialization by comparing it to two-stage training. Finally, I will show how end-to-end training can eliminate the need for detailed manual annotation.

Bio:

Hao Tang is a Ph.D. candidate at Toyota Technological Institute at Chicago. His main interests are in machine learning and its application to speech recognition, with particular interests in discriminative training and segmental models. His work on segmental models was nominated for the Best Paper Award at ASRU 2015, and an application of such models to fingerspelling recognition earned a Best Student Paper Award at ICASSP 2016. He received a B.S. degree in Computer Science and an M.S. degree in Electrical Engineering from National Taiwan University in 2007 and 2010, respectively.