
SCiL is meeting this week!

The Society for Computation in Linguistics is meeting this week. It got under way today with a plenary talk by Naomi Feldman; the recording is now available to registered participants. The schedule is here: https://www.scil2021.org/schedule-1
To register, go here: https://www.scil2021.org/registration (free for students, $20 for others). Once registered, you can get the Zoom and GatherTown links here: https://www.scil2021.org/basic-05-1 (details on how the conference is being run are also posted there).

The SCiL proceedings are now available here: https://scholarworks.umass.edu/scil/.

GLSA publications now available in ScholarWorks!

The Graduate Linguistics Students Association is now making many of its older publications available through the UMass Amherst library's open-access ScholarWorks platform. This is a great resource: NELS proceedings through 2002, University of Massachusetts Occasional Papers (UMOP) through 2007, and Semantics of Under-Represented Languages in the Americas through 2003. Huge thanks to Andrew Lamont and Tom Maxfield for their work on this project, and to Erin Jerome of the UMass library.

Newer publications are available for sale on the GLSA website. One highlight of the open-access release is the appearance of UMOP 37: Semantics and processing, which had remained unpublished until now.

Nelson, Pater and Prickett UCLA colloquium

Max Nelson, Joe Pater, and Brandon Prickett presented “Representations in neural network learning of phonology” in the UCLA colloquium series on Friday, October 9th. The abstract is below, and the slides can be found here.

Abstract. The question of what representations are needed for neural network (NN) learning of phonological generalizations was a central issue in the application of NNs to the learning of English past-tense morphophonology in Rumelhart and McClelland (1986) and in subsequent work of that era. It can now be addressed anew, given later developments in NN technology. In this talk we will present computational experiments bearing on three specific questions:

1. Are variables needed for phonological assimilation and dissimilation?

2. Are variables needed to model learning experiments involving reduplication (e.g. Marcus et al. 1999)?

3. What kind of architecture is necessary for the full range of natural language reduplication?
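
As background on the second question: Marcus et al. (1999) familiarized infants with three-syllable sequences following an ABA or ABB template and tested generalization to sequences built from novel syllables. Below is a minimal sketch of how stimuli of that general shape can be generated, with hypothetical CV syllable inventories and item counts (these are illustrative assumptions, not the materials used in the talk or in the original study).

```python
# Sketch of a Marcus et al. (1999)-style stimulus set for probing
# generalization of a reduplicative template. Syllable inventories
# and counts here are hypothetical, chosen only for illustration.
import itertools
import random

CONSONANTS = ["b", "d", "g", "k", "l", "n", "p", "t"]
VOWELS = ["a", "e", "i", "o"]

def syllables(consonants, vowels):
    """All CV syllables from the given inventories."""
    return [c + v for c, v in itertools.product(consonants, vowels)]

def make_items(sylls, pattern, n, rng):
    """Build n three-syllable items following an ABA or ABB template."""
    items = []
    for _ in range(n):
        a, b = rng.sample(sylls, 2)
        items.append((a, b, a) if pattern == "ABA" else (a, b, b))
    return items

rng = random.Random(0)
# Disjoint inventories: test syllables never occur in training.
train_sylls = syllables(CONSONANTS[:4], VOWELS[:2])
test_sylls = syllables(CONSONANTS[4:], VOWELS[2:])

train = make_items(train_sylls, "ABB", 16, rng)
test = make_items(test_sylls, "ABB", 4, rng)  # generalization items

print(train[0], test[0])
```

The key design point is that the test items are composed entirely of held-out syllables, so a model that extends the template to them cannot be relying on memorized surface strings; that is what makes such experiments a probe for variable-like representations.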