Brain scientists discover meaning composition

If there is something that unites linguists of all stripes, it’s the recognition that, in some way or other, natural languages systematically represent the cognitive distinction between the agent and the theme or patient of an action. The distinction is known to play a crucial role in the representation of verb meanings and in the way they semantically compose with the verb’s arguments.

An article in the Harvard Gazette of October 5 reports that two neuroscientists at Harvard discovered that the brain represents agents and patients/themes of actions in two distinct adjacent regions. The experiments are beautiful. This would be an occasion for linguists to celebrate if it weren’t for the fact that the finding is being reported as a discovery about how the brain builds new thoughts. I don’t know who is ultimately responsible for aggrandizing the nature of the finding. Already in the original article in the Proceedings of the National Academy of Sciences, the reader is led to believe that what was found went beyond the mere neural localization of a well-established semantic distinction. At the beginning of the PNAS article, the significance of the reported findings is described as follows:

“The 18th-century Prussian philosopher Wilhelm von Humboldt famously noted that natural language makes “infinite use of finite means.” By this, he meant that language deploys a finite set of words to express an effectively infinite set of ideas. As the seat of both language and thought, the human brain must be capable of rapidly encoding the multitude of thoughts that a sentence could convey. How does this work? Here, we find evidence supporting a long-standing conjecture of cognitive science: that the human brain encodes the meanings of simple sentences much like a computer, with distinct neural populations representing answers to basic questions of meaning such as “Who did it?” and “To whom was it done?””

The bulk of this paragraph describes no less than the research program of modern linguistics. How can the National Academy of Sciences tolerate such a disconnect between two disciplines within the Cognitive Sciences? How is it possible that a paper in neuroscience describes as a sensational new finding something that amounts to no more than the localization in the brain of a well-known and well-researched semantic distinction? We’ll never make progress in Cognitive Science if we allow subdisciplines to ignore each other in order to inflate the perceived importance of their own results.

One quote (attributed to Steven Frankland) in the Harvard Gazette article may point to the source of the communication problem: “This [the systematic representation of agents and themes/patients, A.K.] has been a central theoretical discussion in cognitive science for a long time, and although it has seemed like a pretty good bet that the brain works this way, there’s been little direct empirical evidence for it.” This quote makes it appear as if the idea that the human mind systematically represents agents and themes/patients had the mere status of a bet before the distinction could actually be localized in the brain. That the distinction is systematically represented in all languages of the world is not given the status of a fact in this quote – it doesn’t count as “empirical evidence”. It’s like denying your pulse the status of a fact before we can localize the mechanisms that regulate it in the brain.

Headlines are routinely used for false advertising in the Cognitive Sciences. Science should be immune to those practices. The reported finding concerns a tiny aspect of meaning composition of the simplest kind, and it implies nothing about the exact mechanism of composition. There is no empirical basis for drawing grand conclusions about “how the brain builds new thoughts” or about the brain “architecture for encoding sentence meaning.” Exaggerated headlines should be on the list of unscholarly behaviors that journal editors might want to discourage.

Sources: How the brain builds new thoughts (Harvard Gazette, October 8) and the September 15 issue of the Proceedings of the National Academy of Sciences: An architecture for encoding sentence meaning in left mid-superior temporal cortex.

Beth Stevens: Learning by cutting connections?

Source: MacArthur Foundation

We are born with an excess of synaptic connections. Through a normal developmental process called “pruning”, some of those synaptic connections get cut. Which ones? How are synaptic connections selected for elimination? What does all of this mean for our theories of how linguistic patterns and structures are acquired by children? 

“Beth Stevens is a neuroscientist whose research on microglial cells is prompting a significant shift in thinking about neuron communication in the healthy brain and the origins of adult neurological diseases. Until recently, it was believed that the primary function of microglia was immunological; they protected the brain by reducing inflammation and removing foreign bodies.

Stevens identified an additional, yet critical, role: the microglia are responsible for the “pruning” or removal of synaptic cells during brain development. Synapses form the connections, or means of communication, between nerve cells, and these pathways are the basis for all functions or jobs the brain performs. Using a novel model system that allows direct visualization of synapse pruning at various stages of brain development, Stevens demonstrated that the microglia’s pruning depends on the level of activity of neural pathways. She identified immune proteins called complement that “tag” (or bind) excess synapses with an “eat me” signal in the healthy developing brain. Through a process of phagocytosis, the microglia engulf or “eat” the synapses identified for elimination. This pruning optimizes the brain’s synaptic arrangements, ensuring that it has the most efficient wiring.”

Related articles: Microglia: New Roles for the Synaptic Stripper (Neuron) and Phagocytic cells: sculpting synaptic circuits in the developing nervous system

Your tongue and the octopus

From USC News: “A linguist and a marine biologist at the USC Dornsife College of Letters, Arts and Sciences began an unlikely project two years ago to compare the movement of the human tongue with the manipulation of the arms of the octopus and the undulation of a small worm known as C. elegans. …

Titled “Dynamical Principles of Animal Movement,” the project is supported by the National Science Foundation. Its principal investigators at USC Dornsife are Khalil Iskarous, assistant professor of linguistics, and Andrew Gracey, associate professor of biological sciences.

As a linguist, Iskarous hopes the research will help explain how movements of the human tongue are compromised by Parkinson’s disease, but he said the NSF research is aimed at broader questions of motor control.”

Animal cognition: the cat and the octopus

This week’s issue of Nature reports that the octopus genome is almost as large as that of humans, and it contains a greater number of protein-coding genes. Neurobiologist Benny Hochner from the Hebrew University of Jerusalem in Israel has studied octopus neurophysiology for 20 years: “It’s important for us to know the genome, because it gives us insights into how the sophisticated cognitive skills of octopuses evolved.” Researchers want to understand how the cephalopods, a class of free-floating molluscs, produced a creature that is clever enough to navigate highly complex mazes and open jars filled with tasty crabs. The video above, which comes with the Nature article, shows (among other things) an octopus opening a jar. Here is a video of our cat Willy doing the same thing: opening a jar containing food for him. Willy can also open refrigerators. We had to attach a kids’ lock to our refrigerator to prevent him from robbing our dinner.

The neural code for belief ascriptions

Jorie Koster-Hale

When I read in the February 28, 2014 issue of Science that the neural code for phonetic features had been discovered, I felt that this was a major breakthrough. I also felt, though, that marketing this finding as the discovery of “the neural code that makes us human” was greatly exaggerated. I wished at the time that similar discoveries could be made about semantic features. I did not know then that Jorie Koster-Hale’s dissertation (September 2014) would report precisely those kinds of findings. I think her discoveries are game-changing for all of us who are interested in the semantics and acquisition of attitude ascriptions. The prospect of being able to find the neural code for at least some of the features appearing in the semantic representations of belief ascriptions may profoundly change the way we do semantics. We will be held to much higher standards. For the first time, we may be able to do what – even a few weeks ago – I thought we weren’t yet able to do: relate some of the crucial features of semantic representations to a neural level of representation.

Among other things, Koster-Hale found that epistemic properties of other people’s beliefs are represented via response patterns of neural populations in canonical Theory of Mind regions. These properties relate to the kind of evidence that grounds a belief: whether it was good evidence or not, or whether it was visual or auditory evidence. These properties trigger a major distinction in the class of attitude verbs: verbs in the believe family (believe that, suspect that, conjecture that) can be used to report false beliefs, while verbs in the know family (know that, discover that, reveal that, hear that, see that) cannot – those verbs can only describe attitudes that are properly connected to reality. The very same epistemic properties are also known to be grammaticalized in evidential paradigms across languages, as Koster-Hale points out (see Aikhenvald 2004).

Related article in Cognition: “Thinking about seeing: Perceptual sources of knowledge are encoded in the theory of mind brain regions of sighted and blind adults.”

Find your most interesting question

Nicola Spaldin

From Science, 3 July 2015. Video

“Frighteningly, I have reached the stage in my career when young people often ask me for advice. My safe and sensible side tells me to pass along the same advice I received: Make a solid contribution to an established field and publish a lot to become known and respected by your community. Save the high-risk stuff until after tenure. But, deep down, I hope young scientists—you—will choose not to follow that advice. I hope instead you will find the question that for you is the most interesting in the world, go after its answer with all your youthful passion, and pioneer your own science revolution.”

From Physics Central: “Prof. Spaldin spends her time researching how to get materials to do multiple tasks. She uses calculations and simulations to design materials that combine more than one function. “Basic laws say this property can’t coexist with this property and we try to get around that,” she said. For example, “In your laptop, a semiconductor material processes the information and a magnetic material stores the information. If we could process and store in the same piece of stuff, we could make your computer smaller, lighter and able to use less power.” Her research continues to have real implications in technology as iPods shrink and PCs become the size of notepads.”

I was asked why I am featuring a theoretical materials scientist in my SEMANTICS notebook. One reason is that I love her career advice. The other reason is that I find it very useful for my own work to take glimpses into other disciplines. The way Spaldin uses computational modeling to find ways of combining apparently incompatible properties of materials is truly fascinating and inspiring. Incidentally, the bust Spaldin is almost hiding behind in the picture above (taken from her website) is the bust of Robert Schumann. Why should a theoretical materials scientist portray herself as peeking out from behind a bust of Robert Schumann? 

What I enjoyed most during my year as a fellow at the Radcliffe Institute was the company of scholars and artists from many different fields. Hearing about their work and trying to understand what makes them tick inspired my own work. I have no idea why and how, but it did. I am using my notebook to hold on to that spirit.  

Handbook of African American Language

The Oxford Handbook of African American Language. Edited by Sonja Lanehart. Oxford University Press. 2015.

From the Publisher’s description: “The goal of The Oxford Handbook of African American Language is to provide readers with a wide range of analyses of both traditional and contemporary work on language use in African American communities in a broad collective. The Handbook offers a survey of language and its uses in African American communities from a wide range of contexts organized into seven sections: Origins and Historical Perspectives; Lects and Variation; Structure and Description; Child Language Acquisition and Development; Education; Language in Society; and Language and Identity. It is a handbook of research on African American Language (AAL) and, as such, provides a variety of scholarly perspectives that may not align with each other — as is indicative of most scholarly research. The chapters in this book “interact” with one another as contributors frequently refer the reader to further elaboration on and references to related issues and connect their own research to related topics in other chapters within their own sections and the handbook more generally to create dialogue about AAL, thus affirming the need for collaborative thinking about the issues in AAL research.” A selection of chapters: 

Syntax and Semantics: Lisa J. Green and Walter Sistrunk.

The Systematic Marking of Tense, Modality and Aspect in African American Language: Charles E. DeBose.

Dialect Switching and Mathematical Reasoning Tests: Implications for Early Educational Achievement: J. Michael Terry, Randall Hendrick, Evangelos Evangelou, and Richard L. Smith. Related article in Lingua.

Our universities: the outrageous reality

Article by Andrew Delbanco in The New York Review of Books. July 9, 2015.

“Death may be the great equalizer, but Americans have long believed that during this life “the spread of education would do more than all things else to obliterate factitious distinctions in society.” These words come from Horace Mann, whose goal was to establish primary schooling for all children—no small ambition when he announced it in 1848.”

“Perhaps concern for the poor has shriveled not only among policymakers but in the broader public. Perhaps in our time of focus on the wealthy elite and the shrinking middle class, there is a diminished general will to regard poor Americans as worthy of what are sometimes called “the blessings of American life”—among which the right to education has always been high if not paramount.”

I teach at a public university, UMass Amherst. In-state students pay more than $14,000 in tuition and fees to attend. With room and board, the total cost is more than $25,000. In 2012, Massachusetts spent only 0.3% of its economic resources on higher education – less than 47 other states in the US.

“We are now in a transitional place, where we understand college to be as essential to success as high school was understood to be in the middle of the last century, and yet we charge citizens thousands of dollars to get a college education.” Clawson & Page 2011

On training graduate students

Here is some good advice (and a book recommendation) for PhD students and advisors of PhD students by Sundar Christopher. The book targets the graduate student experience in the US, where students often enter PhD programs right after College. Several European institutions have developed attractive PhD programs that combine the best features of the American system, which places a lot of emphasis on training, and the traditional European system, which places a lot of emphasis on independence. Those programs are ‘apprentice-style’: they include formalized training components, but also allow students to be contributing researchers from the very start. The entrance requirement is usually an MA, which means that students are already weaned off textbooks by the time they begin their PhD. They are ready and eager to see themselves as researchers. I myself grew up in the old-style European system, which had no formalized training for PhD students. By sheer luck, I was apprenticed to the best teachers you could imagine and confronted with the best research programs in formal semantics at the time.

When I moved to the US, the American, ‘school-like’, system of graduate student education was new to me. To fit in, I must have overdone it at the beginning of my UMass career. One day, Barbara Partee invited me for dinner at her house together with Terry Parsons, who was visiting. I don’t remember much about that evening except for one thing that stuck in my mind. Barbara told both of us that we ‘overtaught’ our graduate students, that we ‘overprepared’ our graduate classes, and that we were too ‘controlling’ of the discourse in the classroom. I took it to heart. I learned (I hope) to treat even the youngest graduate students as researchers who need guidance and help, not so much with textbook material, which they can easily absorb on their own, but with groping in the dark, seeking out their own challenges, getting used to unknown terrain, and feeling comfortable about asking probing questions. I realized that teaching PhD students ‘by the book’ (even if it’s your own) is very easy to do (no preparation required), but it is also a sure recipe for failure in the long run. Students who are taught by the book won’t be ready to do original work by the time they have to write their qualifying papers. They will remain dependent for too long.

When I visited the University of Maryland during NASSLLI last year, I got to know some of the graduate students there. I saw that even first year semantics students were already working on their own original research projects. It’s not that they weren’t taking any classes. They were, but they were at the same time contributing members of research groups. This is also the way PhD students are trained in the Berlin School of Mind and Brain or at ILLC in Amsterdam. The PhD students in all three of those programs are enthusiastic and self-confident scholars who are deeply immersed in their research, mentored by specialists in their field, as well as by a Graduate Program Director. Maryland has a post-baccalaureate program providing a bridge between College and Graduate School. The Berlin and Amsterdam programs have associated MA programs that offer introductory and more specialized graduate-level classes, as well as ‘methods’ classes. In addition, there are workshops and mini-courses targeting PhD students. I think programs like these might be the future of graduate education in the Cognitive Sciences, including linguistics. The time of the ’60s-style PhD program in linguistics that was so successful in the US may be over. Training in linguistics is bound to become more like training in the sciences. 

Yale University has an interesting combined PhD program in Philosophy and Psychology. Since Zoltan Szabo, Jason Stanley, and Larry Horn are members of the Philosophy faculty at Yale, the program offers the intriguing possibility of PhD-level training in formal semantics and pragmatics in combination with both philosophy and psychology. Rather than acquiring breadth and well-roundedness within a Linguistics program by taking classes in, say, Phonetics or Phonology, semantics graduate students at Yale can acquire a very different kind of breadth via a formal connection with the departments of Philosophy or Psychology. Jonathan Phillips’ profile on his website is a good example of a specialization in Formal Semantics in the context of Philosophy and Psychology. The Yale program requires students to affiliate with one ‘home department’. This is important because jobs are still mostly allocated along traditional departmental lines.

PhD students also have to think about what is ahead of them. Here is a link to a very useful book about preparing students for life after the PhD: “A PhD is not enough” by Peter Feibelman.

The Ocean, the Bird, and the Scholar

Helen Vendler: The Ocean, the Bird, and the Scholar. Essays on Poets and Poetry. Harvard University Press. 2015. Review in today’s Times Higher Education.

“My first puzzle as an adolescent was how could one know, if one had never seen a ballet before, that one dancer just hadn’t done a certain movement adequately? How could one perceive that something was wrong in the phrasing of a musical moment, if one had never heard that aria before? The presence of an invisible contour of the perfect inhabiting the mind and testing all performances against itself was amazing to me.” Helen Vendler

“In the end, Vendler majored in chemistry, and then won a Fulbright scholarship to study mathematics in Belgium. While there, she obtained permission to study English literature instead […]. After an unsuccessful application to Harvard for graduate study, she prepared for a second attempt by taking six courses in English a term at Boston University, and was finally admitted to Harvard’s PhD programme. When she arrived to register, the chairman of the department told her, “You know we don’t want you here, Miss Hennessy: we don’t want any women here.” She was shaken, but continued.” Elizabeth Greene, from the review.