Course Description
Experimental methods and corpus annotation are becoming increasingly important tools in the development of syntactic and semantic theories. While regression-based approaches to the analysis of experimental and corpus data are widely known, methods for inducing expressive syntactic and semantic representations from such data remain relatively underused. Such methods have only recently become feasible due to advances in machine learning and the availability of large-scale datasets of acceptability and inference judgments; and they hold promise because they allow theoreticians (i) to design analyses directly in terms of the theoretical constructs of interest and (ii) to synthesize multiple sources and types of data within a single model.
The techniques for syntactic and semantic representation induction come from the broad area of machine learning known as representation learning. While such techniques are now common in the natural language processing (NLP) literature, their use is largely confined either to models focused on particular NLP tasks, such as question answering or information extraction, or to ‘probing’ the representations of existing NLP models. As such, it remains difficult to see this literature’s relevance for theoreticians. This course aims to demonstrate that relevance by focusing on the use of representation learning for developing syntactic and semantic theories.
Area Tags: Syntax, Semantics, Computational Linguistics, Cognitive Science, Statistics
(Session 1) Part 1: Monday/Thursday 3:00-4:20
Part 2: Tuesday/Friday 10:30-11:50
Location: Part 1: ILC N101; Part 2: ILC S231
Instructor: Aaron White
Aaron Steven White is an Assistant Professor of Linguistics and Computer Science at the University of Rochester, where he directs the Formal and Computational Semantics lab (FACTS.lab). His research investigates the relationship between linguistic expressions and the conceptual categories that undergird the human ability to convey information about possible past, present, and future configurations of things in the world. His work has appeared in a variety of linguistics, cognitive science, and natural language processing venues, including Semantics & Pragmatics, Glossa, Language Acquisition, Cognitive Science, Cognitive Psychology, Transactions of the Association for Computational Linguistics, and Empirical Methods in Natural Language Processing.