Course Description
The increased availability of data on speech articulation over the last few decades has provided a new empirical basis for theorizing about the cognitive control of speech. This course will cover empirical facts about speech articulation and theories that have been proposed to explain them. On the empirical side, we’ll cover observations about articulatory kinematics coming from ultrasound, Electromagnetic Articulography (EMA), and real-time MRI. One focus will be cross-language comparison. We’ll explore articulatory dimensions on which languages vary and how that variation relates to other aspects of linguistic structure. The empirical facts will provide a basis for discussing theories of the cognitive control of speech. Course assignments will include hands-on exercises visualizing and modeling articulatory kinematic data. This is an intermediate course in Articulatory Phonetics. As background, students should have the equivalent of an undergraduate phonetics course, including familiarity with the International Phonetic Alphabet and vocal tract anatomy. Assignments will use the Matlab computing environment; however, previous experience with Matlab is not required.
Area Tags: Phonetics, Phonology, Speech, Cognitive Science, Language Production
(Sessions 1 & 2) Monday/Thursday 10:30am – 11:50am
Location: ILC N101
Instructor: Jason Shaw
Jason A. Shaw is an associate professor in the Department of Linguistics at Yale University and associate editor of Laboratory Phonology. He obtained his PhD in linguistics in 2010 from New York University. Before joining Yale in 2016, he did research in Australia supported by the Australian Research Council Discovery Early Career Research Award and in Japan supported by the Japan Society for the Promotion of Science, and was a faculty member in linguistics at Western Sydney University. His research investigates how phonological form structures natural variation in speech and how this variation is interpreted by listeners. His approach combines language description with formal computational models and experimental methods that capture the temporal unfolding of speech planning, production, and perception. Experimental methods used in his research include eye-tracking in speech perception experiments and Electromagnetic Articulography (EMA) and ultrasound in speech production experiments.