Time: 12:00pm to 1:15pm Wednesday Oct. 12.
Location: Tobin 521B
Roger Levy (M.I.T.)
https://bcs.mit.edu/users/rplevymitedu
Probabilistic models of human language comprehension
Human language use is a central problem for the advancement of machine intelligence, and accounting for the capabilities of the human mind in this domain poses some of the deepest challenges in science. In this talk I describe several major advances we have recently made that have led to a state-of-the-art theory of language comprehension as rational, goal-driven inference and action. These advances were made possible by combining leading ideas and techniques from computer science, psychology, and linguistics to define probabilistic models over detailed linguistic representations and to test their predictions against naturalistic data and controlled experiments.

First, I describe a detailed expectation-based theory of real-time language understanding that unifies three topics central to the field — ambiguity resolution, prediction, and syntactic complexity — and that finds broad empirical support. I then describe a “noisy-channel” theory that generalizes the expectation-based theory by removing the assumption of modularity between the processes of individual word recognition and sentence-level comprehension. This theory accounts for critical outstanding puzzles for previous approaches, and when combined with reinforcement learning yields state-of-the-art models of human eye-movement control in reading.