Recent news from the MIT Quest for Intelligence

In spring, the green grass and blue skies bring new energy to all our endeavors — and at the Quest, the past few months have seen remarkable activity in all our Missions and Platforms.

We’re excited to share a few updates with you — and we hope to engage in person, whether in Cambridge or elsewhere.
Image: View from the interior of The Alchemist, a sculpture by Jaume Plensa located on the MIT campus. Photo: Lillie Paquette.

Mission Spotlight: Language Intelligence 

The ability to communicate using language is a unique and powerful human capability. It allows us to transmit information about the external world and our own thoughts and feelings, learn about events we did not witness, and pass down knowledge from generation to generation. In daily life, language often acts as an interface to higher-level cognition. We use language to communicate a problem, provide the evidence required to solve that problem, and, eventually, deliver a solution. 

Co-led by Professors Ev Fedorenko, Jacob Andreas, and Roger Levy, the Language Mission brings together researchers from computer science, neuroscience, cognitive science, and linguistics to build a robust, theoretically motivated, and empirically grounded framework for studying and improving world knowledge and reasoning in large language models. The Mission uses an understanding of human cognition to make models better, and uses models as tools for understanding human language and cognitive processing.

Troland Research Award 

Associate Professor Evelina Fedorenko was awarded the Troland Research Award from the National Academy of Sciences (NAS) in January for her research into the human brain’s language processing network. She uses a combination of brain studies and computational modeling to advance research into the uniquely human ability to produce and comprehend language.

(How) Do Language Models Track State? 

EECS Associate Professor Jacob Andreas and collaborators have published “(How) Do Language Models Track State?” They show not only that transformer language models can learn to keep track of complex, evolving situations effectively and efficiently, but also that researchers can predict and influence the models’ choices through training.

OneStop Eye Movements 

Professor Roger Levy and collaborators recently published OneStop Eye Movements, a large-scale English corpus of eye movements in reading, with 360 participants and a total of 2.6 million words. OneStop is an unprecedented resource of high-quality reading-comprehension materials that aims to enable new research avenues in the study of reading and human language processing. This effort was partially funded by the Quest.

Image: A bronze statue of FDR's dog Fala in Washington, D.C.

The Intelligence Observatory

New progress in our understanding of intelligence results from interactions between experimental behavioral observations and machine-executable models of how those behaviors arise from underlying mechanisms. To that end, the Intelligence Observatory will build modern, scalable frameworks to plan, execute, and disseminate large, high-precision surveys of natural cognition in varied environments, both virtual and physical.

Training Machines to See in 3D

Image: Mural of the story of the blind men and the elephant, located at MIT's Hayden Library.

While humans can comprehend the shape of a three-dimensional object just by looking at it, machine learning models lack the same spatial reasoning capabilities. In a new study published in Open Mind: Discoveries in Cognitive Science, Quest scientists sought to understand why, and found ways to close the gap between human and machine perception. Learn more about their work.