Securing the ‘internet of things’ in the quantum age
Anantha Chandrakasan (MIT) Quantum computers could break the encryption schemes that protect classical computers today. A promising alternative, lattice-based cryptography, hides data in complex mathematical structures, but it is computationally intensive for smartphones and other embedded devices. In this project, MIT researchers have designed an energy-efficient chip architecture to speed up lattice-based encryption and authentication algorithms.
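For a flavor of the underlying math, here is a toy Learning With Errors (LWE) encryption of a single bit; the parameters are illustrative only, far too small to be secure, and this is not the MIT team's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 97, 8, 16                     # modulus, secret length, sample count

s = rng.integers(0, q, n)               # secret key
A = rng.integers(0, q, (m, n))          # public random matrix
e = rng.integers(-1, 2, m)              # small noise, in {-1, 0, 1}
b = (A @ s + e) % q                     # public key: noisy inner products

def encrypt(bit):
    rows = rng.choice(m, 4, replace=False)       # random subset of samples
    u = A[rows].sum(axis=0) % q
    v = (b[rows].sum() + bit * (q // 2)) % q     # hide the bit near 0 or q/2
    return u, v

def decrypt(u, v):
    d = (v - u @ s) % q                  # leftover = noise + bit * q/2
    return int(min(d, q - d) > q // 4)   # closer to q/2 means the bit was 1
```

Recovering the bit is easy with the secret `s`; without it, an attacker must solve a noisy lattice problem believed hard even for quantum computers.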
A better way to manage inventory
Georgia Perakis (MIT) Companies that manage to stock just enough inventory to meet demand usually have higher profit margins. But accurately forecasting demand at each step in a long supply chain is a tricky process, often because important data points are missing. In this project, researchers are developing an interpretable method that combines machine learning and inventory optimization to make more accurate predictions so companies can make better decisions.
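The inventory-optimization half of such a system often builds on the classic newsvendor model, which stocks the critical-fractile quantile of forecast demand. A minimal sketch, with made-up costs and demand rather than anything from the project:

```python
import numpy as np

def optimal_stock(demand_samples, unit_cost, price):
    """Newsvendor solution: stock the critical-fractile quantile of demand."""
    underage = price - unit_cost         # margin lost per unit of unmet demand
    overage = unit_cost                  # cost sunk in each unsold unit
    fractile = underage / (underage + overage)
    return float(np.quantile(demand_samples, fractile))

# Forecast demand as samples, e.g. drawn from an ML model's predictive distribution.
demand = np.random.default_rng(1).normal(100, 15, 10_000)
stock = optimal_stock(demand, unit_cost=4, price=10)   # the 0.6 quantile here
```

Better demand forecasts sharpen the sample distribution, which is where the machine-learning half of the project comes in.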
Retraining AI systems to perform new tasks with fewer data
Asu Ozdaglar (MIT) The goal of meta learning, or learning to learn, is to harness knowledge from one set of tasks to learn how to perform a new set of tasks. In this project, researchers are developing a meta learning approach that can be applied to many AI problems regardless of the model. It works by matching a generic model with an adaptation policy that updates the model’s parameters, or weights, once it’s deployed in the field. The approach is aimed at developing flexible models that can shift to new environmental conditions with few extra training steps.
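The adaptation step, updating a generic model's weights on a small sample from a new task, can be sketched as plain gradient descent; the linear model and task below are hypothetical stand-ins for the researchers' models:

```python
import numpy as np

def adapt(w0, X, y, lr=0.1, steps=10):
    """Fine-tune generic weights w0 on a small support set from a new task."""
    w = w0.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                    # small sample from a new task
w_true = np.array([1.0, -2.0, 0.5])             # the task's unknown weights
y = X @ w_true

w0 = np.zeros(3)                                # "generic" initialization
w = adapt(w0, X, y)
loss_before = np.mean((X @ w0 - y) ** 2)        # error before adaptation
loss_after = np.mean((X @ w - y) ** 2)          # error after a few steps
```

Meta-learning asks how to choose the generic initialization and the update rule so that this adaptation works well across many tasks, not just one.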
Recommendation algorithms that anticipate the next trend
Negin Golrezaei (MIT) Recommender algorithms have become remarkably good at predicting what movies or products we might like based on past viewing and shopping habits. But preferences change due to external factors that can’t be observed, making longer-term predictions much less accurate. In this project, researchers will attempt to identify and exploit time-varying trends through a mix of acquiring new information and capitalizing on existing observations. The work has relevance for automated financial trading and supply chain management, among other applications.
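One standard way to balance acquiring new information against exploiting existing observations under drifting preferences is a sliding-window bandit, which simply forgets stale data. A toy sketch, not the researchers' algorithm, with a preference shift halfway through:

```python
import math
import random
from collections import deque

class SlidingWindowUCB:
    """UCB bandit whose estimates use only the last `window` observations,
    so the policy forgets preferences that have drifted."""

    def __init__(self, n_arms, window=100):
        self.n_arms = n_arms
        self.history = deque(maxlen=window)   # recent (arm, reward) pairs

    def select(self, t):
        counts = [0] * self.n_arms
        sums = [0.0] * self.n_arms
        for arm, r in self.history:
            counts[arm] += 1
            sums[arm] += r
        for a in range(self.n_arms):
            if counts[a] == 0:
                return a                      # explore any arm unseen recently
        ucb = [sums[a] / counts[a] + math.sqrt(2 * math.log(t + 1) / counts[a])
               for a in range(self.n_arms)]
        return max(range(self.n_arms), key=ucb.__getitem__)

    def update(self, arm, reward):
        self.history.append((arm, reward))

# Arm 0 is best at first; arm 1 becomes best after the trend shifts at t=300.
random.seed(0)
agent = SlidingWindowUCB(2, window=100)
picks_late = 0
for t in range(600):
    arm = agent.select(t)
    best = 0 if t < 300 else 1
    reward = 1.0 if arm == best and random.random() < 0.9 else 0.0
    agent.update(arm, reward)
    picks_late += (t >= 500 and arm == 1)     # count late picks of the new best
```

Within a window or two of the shift, the agent has mostly switched to the newly preferred arm.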
Faster video processing using insights from the brain
Aude Oliva (MIT); Rogerio Feris (IBM) Humans and other animals are bombarded with visual information as they move about the world. But the brain manages to extract just enough detail to make sense of it all and not get overwhelmed. The virtual world of video, by contrast, contains far more information, much of it redundant. Each second of a clip may contain more than a dozen frames that are unnecessary to understanding the events that are unfolding. In this project, researchers are experimenting with ways to cut as many frames as possible to build video-recognition models that are substantially smaller and more efficient than those used today.
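A crude version of frame-cutting keeps a frame only when it differs enough from the last kept one; the threshold below is an arbitrary example, not the project's method:

```python
import numpy as np

def keep_distinct_frames(frames, threshold=10.0):
    """Keep a frame only if its mean pixel difference from the last
    kept frame exceeds the threshold."""
    kept = [frames[0]]
    for f in frames[1:]:
        diff = np.abs(f.astype(float) - kept[-1].astype(float)).mean()
        if diff > threshold:
            kept.append(f)
    return kept

rng = np.random.default_rng(0)
scene_a = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # stand-in frames
scene_b = rng.integers(0, 256, (8, 8), dtype=np.uint8)
clip = [scene_a] * 12 + [scene_b] * 12   # 24 frames, but only 2 distinct scenes
kept = keep_distinct_frames(clip)        # collapses to one frame per scene
```

The research question is how aggressively frames can be dropped in this spirit while a recognition model still understands the events unfolding.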
A chip that uses 10 times less energy than a mobile GPU
Vivienne Sze, Joel Emer (MIT) Storing and reusing data locally, across a chip’s processing cores, speeds up the training of AI models by reducing data transportation costs. It also improves inference, allowing applications to run faster. But large and small models vary widely in shape, calling for hardware with versatile processing capabilities. MIT researchers have designed such a chip: Eyeriss 2. It uses an on-chip network to adaptively reuse data and adjust to the bandwidth requirements of different models, using 10 times less energy than a mobile GPU. The chip’s designers recently wrote a book on this emerging field: Efficient Processing of Deep Neural Networks.
Toward a more human-like robot
Brian Williams (MIT) Robots are venturing into physically demanding jobs and places that are either too difficult or too dangerous for humans to explore. Programming a fleet of robots to clean up a radioactive site or inspect a cable at the bottom of the ocean requires a level of precision, autonomy, and reasoning power that is currently beyond reach. In this project, researchers are using a platform called The Incredible Machine to test ways of training robots to plan and execute a set of actions, gather feedback from those actions, and weigh the risks of performing them, making them better equipped to navigate complex situations.
The robot radiologist moves into the operating room
Polina Golland (MIT) AI promises to revolutionize medical imaging with its ability to pick out signs of disease too subtle for the human eye to detect. The rise of robot radiologists could lead to earlier diagnosis of diseases and lower the cost of health care. But the benefits don’t stop there. Image-reading algorithms could have an even greater impact in the operating room. In this project, researchers are applying AI to ultrasound scans of arteries and other tissue before and during surgery. They hope that more detailed feedback in real time can improve patient outcomes.
Brain-inspired computer vision models for improved performance
James DiCarlo (MIT); David Cox (IBM) Computer vision models can be trained to recognize objects nearly as well as humans, but they have one major flaw: slight changes to an image can cause the model to make mistakes a human never would. To boost their performance, neuroscientists are building new computer models directly guided by the brain. In ongoing work, researchers at MIT, IBM, and Harvard are using knowledge of the brain’s early visual processing to create computer vision models that “see” the world more like us, and are thus less susceptible to egregious mistakes.
Bringing deep learning to “Internet of things” devices
Song Han (MIT) The branch of AI that curates your social media feed and serves up search results could soon check your vitals or set your thermostat. MIT’s Song Han is working to bring deep neural networks to the tiny computer chips in wearable medical devices, household appliances, and the billions of other gadgets that make up the “internet of things.” One recent system to come out of his lab, MCUNet, designs compact neural networks that allow AI applications to run smoothly on IoT devices despite their limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.
Making AI-generated fakes stand out
Aude Oliva (MIT) Distinguishing real photos and videos from so-called ‘deepfakes’ is becoming more difficult as tools for manipulating images grow more sophisticated. In this project, researchers are exploring ways to magnify the tiny errors produced in fabricating a video using deep generative models. If subtle blemishes can be rendered more noticeable, humans and machines will be able to quickly pull fake content from platforms before it poisons the public discourse. Inspired by the science of optical illusions, researchers have designed a method that makes subtle video edits jump out to the observer’s naked eye.
Creating images that stick in the mind’s eye
Phillip Isola and Aude Oliva (MIT) Creating memorable images is both an art and a science, as the rising popularity of deep generative models shows. To understand what makes an image unforgettable, researchers gathered data from human subjects and trained an image-generating model on those preferences. Their creation, GANalyze, draws pictures it thinks people will remember best: images with bright colors, simple backgrounds, and centered subjects. Applications include ways to detect and treat memory loss, and to design graphics that help people retain information.
A mental model of the world that can be reused across multiple tasks
Phillip Isola (MIT) In a virtual world, robot agents learn new tasks by training on data, seeking guidance from a robot teacher or following an external reward signal like maximizing points in a game. These methods have allowed AI systems to surpass humans at chess, Go and many video games. But transferring these techniques to real-world robots has proven challenging. In this project, researchers will turn virtual agents loose in a world lacking external rewards or instruction, and instead encourage the agents to discover knowledge on their own. Researchers hope that as the agents explore, and their mental model improves, so will their ability to generalize their knowledge to new tasks.
A model to limit the spread of Covid-19 as campus reopens
Deborah Campbell, Joshua Joseph, Jonathan Pitts, Nick Roy (MIT) As MIT leaders plan to bring some students back to campus in the fall, they will be looking to a new model under development to keep the virus in check. The MIT Covid-19 Response System, or MCRS, will help MIT leaders to identify new Covid-19 infections, and to make decisions aimed at limiting new cases. A collaboration between MIT and MIT Lincoln Laboratory researchers, MCRS will integrate available anonymized data, including Covid-19 test results, traffic in and out of campus buildings and daily questionnaire results. A list of frequently asked questions is available here: https://covid-19.mit.edu/mit-covid-19-response-system-mcrs-faq
Reasoning about dynamic events and their relation to one another
Aude Oliva (MIT) The ability to recognize someone skiing from someone running, and to understand that both activities qualify as exercise, comes naturally to humans. Researchers are trying to build something similar into machines. By combining techniques in computer vision and natural language processing, researchers have built a model that recognizes visual concepts like running and skiing, links them to their linguistic representations and learns to associate those terms with an abstract concept like “exercising.” If machines can learn to reason about dynamic events, they may be able to learn from far fewer data, say researchers.
Reverse engineering a child’s common-sense reasoning
Vikash Mansinghka, Joshua Tenenbaum (MIT) For all the gains that deep learning models have made on task-specific benchmarks, they struggle to grasp causal relationships and how physical objects and people interact. To improve how models perform in real-world settings, researchers are trying to reverse engineer a child's ability to perceive the world in three dimensions, manipulate physical objects, and infer the mental states of other people. The researchers are using their recently developed probabilistic programming language, Gen, to remove errors from the output of deep learning algorithms using symbolic generative models. Their end goal is an end-to-end AI architecture that can be used in robotics, including for computer vision and cognitive tasks.
A home robot that coaches you through life’s ups and downs
Cynthia Breazeal, Sooyeon Jeong (MIT) Anxiety and depression are on the rise as more of our time is spent staring at screens. But if technology is the problem, it might also be the answer. In this project, MIT professor Cynthia Breazeal’s home robot Jibo is being redeployed as a personal wellness coach, programmed to read and respond to people’s moods with advice. If Jibo senses you’re down, for example, he might suggest a “wellness” chat and some positive psychology exercises, like writing down something you feel grateful for.
A robot storyteller that promotes parent-child bonding
Cynthia Breazeal, Huili Chen (MIT) Not all parents have time to regularly read to their children. What if a home robot could fill in, or even improve the quality of parent-child reading time? In this project, researchers are recording parents as they read to their children, and analyzing video, audio, and physiological data from those interactions to understand how robots can augment learning. The goal is to train robots to strengthen parent-child bonding and provide helpful discussion prompts as parent and child read and interact.
Training kids in robotics and AI
Cynthia Breazeal, Randi Williams (MIT) AI gives us personalized recommendations and much more, but it also has limitations and biases. In collaboration with the nonprofit i2 Learning, MIT professor Cynthia Breazeal and her colleagues have developed an AI curriculum around a robot named Gizmo that teaches kids how to train their own robot with an Arduino micro-controller and a user interface based on Scratch-X, a drag-and-drop programming language for children. The curriculum also trains kids to think critically about their software and hardware creations. A hundred Gizmo robots are currently being demoed in Boston and rural Massachusetts.
Testing why the human brain evolved specialized capabilities
Nancy Kanwisher, Katharina Dobs (MIT) Parts of the human brain perform highly specific functions, from recognizing faces to understanding language to reflecting on the thoughts of others. Why might the brain’s domain-specific organization be a good design strategy? In this project, researchers will use neural networks to test whether face- and object-specific perception in primates emerges naturally from learning to excel at both tasks. Early results suggest that the distinct pathways found in the cortex may reflect a computational optimization over development and evolution for the real-world tasks humans solve.
Understanding why familiar faces stand out
Nancy Kanwisher, Katharina Dobs (MIT) With a quick glance, the faces of friends and acquaintances jump out from those of strangers. How does the brain do it? In ongoing research, Nancy Kanwisher and her lab are running experiments on artificial neural networks to test new ideas for how the brain processes faces. Two key findings so far: the brain starts to register the gender and age of a face before recognizing its identity, and face perception is more robust for familiar faces.
Speeding up the archival process
Nicholas Roy, Katherine Gallagher (MIT) Each year, MIT Libraries' Distinctive Collections receives massive donations of personal letters, lecture notes, and other materials that tell MIT’s story and document the history of science and technology. Each unique item must be organized and described, with a typical box of material taking up to 20 hours to process and make available to users. To make the work go faster, MIT Quest is developing an automated system for processing donated archival material. Ultimately, the team will develop an AI pipeline to categorize and extract data from scanned images of the records.
Making more livers available to patients who need them
Nicholas Roy, Katherine Gallagher (MIT) To approve a liver for transplant, pathologists study a slice of tissue and estimate whether its fat content is low enough to qualify. But there are often too few qualified doctors to review tissue samples on the tight timeline needed. Viable livers are inevitably discarded. In this project, researchers are training a deep learning model to pick out globules of fat on a slide to estimate the liver’s overall fat content. Eventually, the model will learn to isolate fat globules in unlabeled images on its own, outputting an estimate with the fat globules circled.
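The final estimation step is straightforward once globules are segmented: fat content is the fraction of tissue area the globule mask covers. A sketch with a hypothetical mask standing in for the model's output:

```python
import numpy as np

def fat_fraction(globule_mask, tissue_mask):
    """Fraction of tissue pixels covered by segmented fat globules."""
    return float(globule_mask[tissue_mask].mean())

tissue = np.ones((100, 100), dtype=bool)     # whole image is tissue here
globules = np.zeros((100, 100), dtype=bool)
globules[10:30, 10:30] = True                # one hypothetical 20x20 globule
frac = fat_fraction(globules, tissue)        # 400 of 10,000 pixels
```

The hard problem the deep learning model solves is producing that globule mask reliably from raw slide images.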
Identifying athletes from a few gestures
Nicholas Roy, Katherine Gallagher (MIT) Footage of professional sports games is a potential goldmine for analysts looking to track player performance over a season or more, but automated tracking has proven to be technically challenging. In a project with MIT’s Aerospace Computational Design Laboratory, Quest engineers developed a pipeline for training AI models to use key point data to recognize athletes from a few gestures, paving the way for greater automated collection of player statistics. Other applications for the work include helping athletes refine their technique.
Advanced composites for lighter, stronger aircraft
Nicholas Roy, Joshua Joseph, Brian Wardle (MIT) After decades of development, an advanced composite made of carbon-reinforced polymer is finding its way into commercial airplanes. Engineers are now looking for an even lighter, stronger variation that can conduct heat and electricity. At MIT, researchers are testing polymers embedded with tiny carbon nanotubes and imaging them under extreme stress. Each test produces thousands of CAT scan-like images that require painstaking analysis. To expedite the process, researchers are training an AI model to identify cracks in the plastic composite. Once implemented, the tool will save time and improve analysis quality. (Image: Brian Wardle)
Pinpointing new dendritic spines to understand how memories form
Nicholas Roy, Katherine Gallagher, Michele Pignatelli, Susumu Tonegawa (MIT) Improved imaging techniques have allowed neuroscientists to see up close the tiny nubs on brain cell dendrites that grow and change shape as memories form. By tracking the evolution of dendritic spines in cells linked to a single memory trace, before and after a learning episode, researchers can estimate where memories may be physically stored. But hand-labeling these before and after images is tedious and time-consuming. In this project, researchers are training a model to automatically identify new spines, and potentially new memory traces.
Training a robot agent to design a more efficient nuclear reactor
Koroush Shirvan, Nicholas Roy, Joshua Joseph (MIT) One important factor driving the cost of nuclear power is the layout of its reactor core. If fuel rods are arranged in an optimal fashion, reactions last longer, burn less fuel and need less maintenance. In this project, researchers are applying reinforcement learning algorithms to find the best way to safely configure fuel rods to make nuclear energy less expensive. Starting with the random 64-rod layout at left, the agent will learn the best way to configure the three types of fuel rods represented by the numbers in red, blue and green.
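To make the search problem concrete, here is a miniature stand-in: a made-up scoring function and simple hill climbing in place of the real physics simulator and reinforcement-learning agent:

```python
import random

def toy_score(layout):
    """Made-up objective rewarding neighboring rods of different types;
    the real project scores layouts with a physics simulator."""
    return sum(layout[i] != layout[i + 1] for i in range(len(layout) - 1))

random.seed(0)
layout = [random.choice([1, 2, 3]) for _ in range(64)]   # random 64-rod start
best = toy_score(layout)
for _ in range(2000):
    i = random.randrange(64)                 # propose changing one rod
    old = layout[i]
    layout[i] = random.choice([1, 2, 3])
    new = toy_score(layout)
    if new >= best:
        best = new                           # keep moves that don't hurt
    else:
        layout[i] = old                      # revert moves that do
```

Even this naive search climbs far above a random layout; reinforcement learning aims to do much better when each evaluation is an expensive reactor simulation.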
Overcoming manufacturing and supply hurdles to provide global access to a coronavirus vaccine
Anthony Sinskey, Stacy Springs (MIT) A vaccine against SARS-CoV-2 would be a crucial turning point in the fight against Covid-19. Yet, its potential impact will be determined by the ability to rapidly and equitably distribute billions of doses globally. This is an unprecedented challenge in biomanufacturing. In this project, researchers will build data-driven statistical models to evaluate tradeoffs in scaling the manufacture and supply of vaccine candidates. Questions include how much production capacity will need to be added, the impact of centralized versus distributed operations, and how to design strategies for fair vaccine distribution. The goal is to give decision makers the evidence needed to cost-effectively achieve global access.
Saving lives while restarting the U.S. economy
Daron Acemoglu, Simon Johnson, Asu Ozdaglar (MIT) Some states are reopening for business even as questions remain about how to protect those most vulnerable to the coronavirus. In this project, researchers will model the effects of targeted lockdowns on the economy and public health. In a recent working paper co-authored by Acemoglu, Victor Chernozhukov, Ivan Werning and Michael Whinston, MIT economists analyzed the relative risk of infection, hospitalization and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.
Which materials make the best face masks?
Lydia Bourouiba (MIT) Seven states have ordered residents to wear face masks in public to limit the spread of coronavirus. But apart from the coveted N95 mask, the effectiveness of many masks remains unclear due to a lack of standardized methods to evaluate them. In this project, researchers will develop methods to measure how well homemade and medical-grade masks do at blocking the tiny droplets of saliva and mucus expelled during normal breathing, coughs or sneezes. The researchers will test materials alone and together, and in a variety of configurations and environmental conditions, to see how well materials protect mask wearers and those around them.
A return to normalcy via lockdowns, treatments and mass testing
Dimitris Bertsimas (MIT) In a few short months, Covid-19 has devastated towns and cities around the world. Researchers are now piecing together the data to understand how policies can limit new infections and deaths and protect the most vulnerable. In this project, researchers will study the effects of targeted lockdowns to reduce new infections, hospital admissions and deaths. In a second phase of the project, they will develop machine learning models to predict how vulnerable a given patient is to Covid-19, and what personalized treatments might work best. They will also develop an inexpensive, spectroscopy-based test for Covid-19 that can deliver results in minutes and pave the way for mass testing.
Leveraging electronic medical records to find Covid-19 therapeutics
Dr. Stan Finkelstein, Roy Welsch (MIT) Developed as a treatment for Ebola, the anti-viral drug remdesivir is now in clinical trials in the United States as a treatment for Covid-19. Similar efforts to repurpose already-approved drugs to treat or prevent the disease are underway. In this project, researchers will use statistics, machine learning, and simulated clinical drug trials to find and test already-approved drugs as potential therapeutics against Covid-19. Researchers will sift through millions of electronic health records and medical claims for signals indicating that drugs used to fight chronic conditions like hypertension, diabetes and gastric reflux might also work against Covid-19 and other diseases.
Treating Covid-19 with repurposed drugs
Rafael Gomez Bombarelli (MIT) As Covid-19’s global death toll mounts, researchers are racing to find a cure among already-approved drugs. Machine learning can expedite screening by letting researchers quickly predict if promising candidates can hit their target. In this project, researchers will represent molecules in three dimensions to see if this added spatial information can help to identify drugs most likely to be effective against the disease. They will use NASA’s Ames and the U.S. Department of Energy’s NERSC supercomputers to further speed the screening process.
Designing proteins to block the new coronavirus
Markus Buehler, Benedetto Marelli (MIT) Proteins are the basic building blocks of life, and with AI, researchers can explore and manipulate their structures to address long-standing problems. Take perishable food. The MIT-IBM Watson AI Lab recently used AI to discover that a silk protein made by honeybees could double as a coating for quick-to-rot foods to extend their shelf life. In this project, researchers will enlist the protein-folding method used in their honeybee-silk discovery to try and defeat the new coronavirus. Their goal is to design proteins able to block the virus from binding to human cells, and to synthesize and test their unique protein creations in the lab. (Image: Markus Buehler)
An app to track declining brain function
Thomas Heldt and Vivienne Sze (MIT) MIT researchers are developing low-cost tools to identify and track Alzheimer’s and other neurodegenerative diseases using a simple mobile-phone app. As patients play an eye-tracking game on their phone, the camera records how quickly and accurately their eyes respond to prompts on the screen. The resulting data can tell researchers how well the patient’s brain is functioning. The app, and the software being developed to crunch the data, could provide a way to track disease progression in patients with Alzheimer's. It could also be used as an adjunct to clinical drug trials by making it easier to track improvements over time.
Developing a better hearing aid
Josh McDermott (MIT) As we get older, many of us struggle to understand what’s being said in noisy environments. The goal of this project is to build algorithms for audio enhancement that will help humans hear in challenging listening conditions. The key idea is to leverage models of the auditory system to create algorithms that enhance the model’s performance, and to see if those improvements can be transferred to human listeners. Here, an algorithm models how the auditory nerve responds to sound. (Image: Mark Saddler)
A privacy-first approach to automated contact tracing
Ronald Rivest, Daniel Weitzner (MIT) Smartphone data can help limit the spread of Covid-19 by identifying people who have come into contact with someone infected with the virus, and thus may have caught the infection themselves. But automated contact tracing also carries serious privacy risks. In collaboration with MIT Lincoln Laboratory, MIT researchers will use encrypted Bluetooth data to ensure personally identifiable information remains anonymous and secure. (Image: Christine Daniloff, MIT)
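The general idea behind privacy-first proximity logging can be sketched with rotating pseudorandom "chirps" derived from a secret seed; this toy omits the many safeguards of real protocols, and the names and seed scheme below are illustrative only:

```python
import hashlib

def chirp(seed: bytes, interval: int) -> bytes:
    """Pseudorandom broadcast for one time interval; unlinkable without the seed."""
    return hashlib.sha256(seed + interval.to_bytes(4, "big")).digest()[:16]

def exposed(heard_chirps, published_seed, intervals=range(1000)):
    """Check a phone's log against a seed published by an infected user."""
    known = {chirp(published_seed, t) for t in intervals}
    return any(c in known for c in heard_chirps)

alice_seed, bob_seed = b"alice-secret", b"bob-secret"     # hypothetical users
carol_log = [chirp(alice_seed, 41), chirp(bob_seed, 42)]  # chirps Carol heard
# If Alice tests positive and publishes her seed, Carol learns she was exposed,
# even though the logged chirps never identified anyone.
was_exposed = exposed(carol_log, alice_seed)
```

Because chirps are one-way hashes of a secret, observers cannot link them to a person or track a device across intervals without the published seed.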
Early detection of sepsis in Covid-19 patients
Daniela Rus (MIT) Sepsis is a deadly complication of Covid-19, the disease caused by the new coronavirus. About 10 percent of Covid-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive. Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival. Early detection can also help hospitals prioritize ICU resources for their sickest patients. In this project, researchers will develop a machine learning system to analyze images of patients’ white blood cells for signs of an activated immune response against sepsis. Here, a white blood cell mounts an attack against malaria (Image: Koch Institute + Ragon Institute of MGH, MIT and Harvard)
A cancer vaccine that kick-starts the immune system
Robert Langer, Ameya Kirtane, Daniel Reker, Giovanni Traverso (MIT) Vaccines have all but eliminated viruses like smallpox, polio and rubella. Could cancer be next? MIT researchers are using machine learning to search through a database of 10 million nanoparticles to find particles capable of activating specific immune cells to fight skin cancer. The team will test promising nanoparticles on cells in the lab, use the data to refine their model, and if successful, extend their work to other types of cancer.
Environmental tracking on-the-fly
Hari Balakrishnan, Mohammed Alizadeh, Hamsa Balakrishnan, Kristin Bergmann, Samuel Madden, Nick Roy, Vinod Vaikuntanathan (MIT) The falling cost of consumer drones has allowed scientists to track erupting volcanoes, receding glaciers and other processes too difficult or dangerous to observe in person. To expand these efforts, MIT researchers are developing a platform to allow hundreds of drones to gather information at closer range than satellites or instruments on land and sea can capture. The platform would include incentives for owners to lend their drones to a mission, and tools to coordinate the fleet's work, interpret collected data and protect personal privacy rights. Applications include monitoring of air pollution, sea-level rise, wildfires, and bridge and building defects.
A model to learn all the world’s languages
Roger Levy, Regina Barzilay and David Pesetsky (MIT) To native English speakers, Swahili sounds completely different from Quechua, but research shows that most languages share common properties. That may explain why humans learn language so easily, acquiring new words and concepts from context, while deep learning models require mountains of training data. The need for so much data leaves voice recognition and translation software beyond reach for thousands of languages that are spoken globally, but are not yet in machine-readable form. Researchers are developing a machine-learning framework to reveal the biases that let children learn language so quickly, and to improve and extend language-learning models to thousands of data-scarce languages around the world. (Image: Jose-Luis Olivares/MIT)
A robot coach that listens and responds
Rosalind Picard and Cynthia Breazeal (MIT) Depression and other mood disorders are still diagnosed and tracked with information that patients give to their doctors, complicating efforts to deliver more personalized therapy. In this pilot project, MIT researchers are developing an emotional-wellness coach that can provide a daily shot of individualized attention and support. Subjects recruited to the study will grade their coach's ability to provide timely and effective advice. The study will also examine whether robot coaches offer better emotional support than state-of-the-art mobile apps already in use.
Taking the guesswork out of farming
Danielle Wood and Neil Gaikwad (MIT) Farming has always been a stressful, unpredictable occupation, but misguided government policies and increasing drought have devastated many family farms and caused suicide rates to skyrocket. In response, MIT researchers are developing an AI framework to make small-scale farming and agriculture markets more efficient. Initiated and led by graduate student Neil Gaikwad, the project will help farmers in the U.S. and India predict how much corn and cotton to plant, and when, and give them market information to make better decisions. The framework is also meant to help policymakers fairly allocate shared resources like water and electricity.
Understanding real-world actions as they unfold
Aude Oliva (MIT) The brain has a remarkable ability to size up a scene and quickly understand what’s going on. MIT-IBM researchers are training machines to do something similar with a dataset of 1 million short video clips called Moments in Time. The models learn to recognize what’s happening in any particular frame, whether that’s pandas playing or robots dancing or a poodle jumping for joy. As AI systems learn to understand the gist of dynamic scenes, the hope is that this knowledge can be transferred to other domains.
Designing a robot with common sense
Leslie Kaelbling, Tomas Lozano-Perez and Joshua Tenenbaum (MIT) A robot that can break down high-level tasks and run for weeks without getting stuck is still a long way from being built. But MIT researchers hope to crack the problem by applying what they know about computers and the human brain. They are currently building an experimental infrastructure that will allow computer simulators and eventually, real robots, to perceive and interact with the world around them, and ultimately achieve a semblance of common sense.
Modeling shifts in human attention with time
Aude Oliva (MIT); Zoya Bylinskii (Adobe Research) How long we look at an image determines what we see. A split-second glance at a photo may be just enough to recognize the dominant subject while a longer look may reveal important details that change our interpretation of what’s happening. In this project, researchers will conduct online experiments to see how viewers’ attention shifts the longer they gaze at an image. They will also attempt to simulate this shifting gaze in a deep learning model. Applications for the work include ways to automatically crop, caption and render images for different viewing durations.
Training and testing AI systems in a 3D world like our own
Jim DiCarlo, Josh McDermott, Joshua Tenenbaum (MIT); Dan Gutfreund (IBM) Modern AI systems are trained on large labeled datasets to “see” and “hear” in a process that bears little resemblance to real-world learning. To create more realistic scenarios, and drive further advances in AI, researchers have created a virtual environment, ThreeDWorld, with cutting-edge video game technology. Through a range of tasks in this simulated world, researchers will attempt to train AI systems to perceive physical structures and events in the world with the ultimate goal of endowing them with more human-like perceptual intelligence. (Image: Jeremy Schwartz)
Finding efficient ‘lottery tickets’ for deep learning
Michael Carbin and Jonathan Frankle (MIT) Under the Lottery Ticket Hypothesis proposed by MIT researchers, deep neural network models contain much smaller subnetworks that can be isolated and trained to full accuracy. If the right subnetwork is found early in training, the model can perform image classification and other tasks with 90 percent fewer connections. In this project, researchers will extend the lottery ticket idea to other settings, and explore ways to further reduce the computational expense of training deep learning models.
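The pruning step at the heart of the hypothesis, keeping only the largest-magnitude weights, can be sketched in a few lines; the 10 percent keep rate mirrors the 90 percent reduction mentioned above, and the weight matrix is a random stand-in:

```python
import numpy as np

def prune_mask(weights, keep_fraction=0.1):
    """Boolean mask keeping the top `keep_fraction` of weights by magnitude."""
    k = int(weights.size * keep_fraction)
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.abs(weights) >= threshold

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))   # a stand-in weight matrix
mask = prune_mask(W)
sparse_W = W * mask               # 90 percent of connections zeroed out
```

Under the hypothesis, the surviving subnetwork, rewound to its early-training weights and retrained, can match the full network's accuracy.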
Lifelong learning for distributed intelligence
Jonathan How (MIT); Matthew Riemer, Gerald Tesauro (IBM) In an increasingly automated future, humans and robots will likely work together in fast-paced, changing environments. In this project, researchers will address some of the key challenges of representing new skills and new models of the environment in a robot agent, sharing and combining this information with other robots on the team, and enabling robots to respond rapidly to new scenarios. The challenge includes creating principled methods for robots to communicate with humans and other robots in ways that are easily understood by humans, and quickly adapt their behaviors and learn to solve tasks in new environments using minimal data.
Probabilistic programming for reliable, reactive AI models
Michael Carbin (MIT); Guillaume Baudart, Martin Hirzel, Louis Mandel (IBM) More companies are integrating state-of-the-art AI models for computer vision and natural-language conversation into their workflows and products. Unfortunately, these applications are neither as reliable nor as responsive to dynamic changes in observed data as they should be. But probabilistic programming, an approach that uses Bayesian statistics to model uncertainty in data and AI models, shows promise. In this project, researchers will build a probabilistic programming language aimed at creating robust, reactive models with native support for online monitoring and learning.
Returning the power of personal data to the people
Kalyan Veeramachaneni (MIT) The AI revolution has been partly fed by a glut of free, personal data streaming from smartphones and other devices. In exchange for free search, email and other services, consumers have been quick to let companies and governments control and store their data despite the well-documented privacy and security risks. In this project, researchers will explore ways for people to hold on to their data, to keep it private and safe, but to also build predictive models of their own. This would allow ordinary people to reap the economic benefits of the data they generate in day-to-day life.
Machine learning for the quantum world
Aram Harrow, Peter Shor (MIT); Sergey Bravyi, David Gosset, Kristan Temme (IBM) Classical computers and algorithms have surpassed humans at some tasks. If the power of quantum computing is added to the mix, the next revolution in AI could be even more profound. But the transition won’t be easy. Running and storing calculations in bits, as 1s and 0s, is fundamentally different from computing in qubits, which can be 1 and 0 at once. In this project, researchers will explore whether current quantum technology can overcome the limitations of classical algorithms, and whether classical algorithms can advance quantum information science. The work will address questions about classical versus quantum representations of data, and our ability to use quantum protocols to process and store it.
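The bit/qubit distinction can be made concrete with a tiny state-vector simulation (a generic textbook illustration, not part of this project): applying a Hadamard gate puts a qubit that started as a definite 0 into an equal superposition of 0 and 1.

```python
import numpy as np

# A qubit is a length-2 complex state vector; |0> is [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0              # (|0> + |1>) / sqrt(2): "1 and 0 at once"
probs = np.abs(state) ** 2    # Born rule: probability of measuring 0 or 1
```

Unlike a classical bit, the qubit now yields 0 or 1 with equal probability when measured, and the probabilities always sum to one.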
Fair and accurate algorithmic decision-making
Michiel Bakker, Alejandro Noriega, Alex "Sandy" Pentland (MIT); Kush Varshney (IBM) AI can help society make better, more informed decisions, but without proper controls, algorithmic decision-making can perpetuate and worsen existing biases and inequities. In this project, researchers will develop methods to check AI systems for bias by analyzing each stage of the pipeline (data gathering, inference and decision-making) at once. A more holistic approach will lead to systems that prioritize fairness, accuracy, efficiency and privacy, they argue. The researchers will test their approach on models for pricing risk in insurance and targeting social services to those who need them most.
Interpreting dynamic physical systems via sensor data
Duane Boning (MIT); Jayant Kalagnanam, Kyongmin Yeo (IBM) The rise of the Internet-of-Things, or networks of connected, sensor-embedded devices, is generating a tsunami of high-quality data about the physical world. Delivered in near-real time, sensor data is a goldmine for scientists and engineers trying to make better observations of natural and manufactured systems. In this project, researchers will develop algorithms to overcome challenges of working with this dynamic, time-varying type of data. The outcome of this research, a set of model-free simulations of dynamical systems, could be used for decision support in a variety of industries, from environmental monitoring to cognitive manufacturing.
A better, faster way for doctors to query patient medical records
Peter Szolovits (MIT); Preethi Raghavan (IBM) The rise of electronic health records has put the full medical history of patients at a doctor’s fingertips, but AI systems still struggle to understand and retrieve the bits of information a doctor may need at a given moment. Much of the problem comes down to the many ways that a single question can be phrased. To build a better automated question-answering system, researchers will focus on leveraging contextual cues in a doctor’s question, and integrating data from a patient’s timeline, to pinpoint exactly what information the doctor is looking for.
Designing novel nanomaterials with the help of AI
Steven Johnson, Giuseppe Romano, Raul Radovitzky (MIT); Payel Das, Youssef Mroueh (IBM) Designing nanostructured materials is challenging because of the large number of variables needed to describe all possible manufacturable geometries. Predicting a physical response also often involves solving complicated partial differential equations, hindering efficient materials optimization. In this project, researchers will develop deep learning models and active-learning techniques to evaluate nanomaterials under physical constraints. By integrating machine learning and scientific computing, this approach will speed the discovery of new nanomaterials with applications in optical lensing, harvesting of waste heat and building fracture-resistant airplanes.
A deep dive on Lou Gehrig’s disease, or ALS, using multimodal data
Ernest Fraenkel (MIT); Soumya Ghosh, Kenney Ng (IBM) Amyotrophic lateral sclerosis, or ALS, is a progressive neurological disease that some researchers believe is actually a cluster of related diseases. If understood, they argue, these ALS subtypes could be targeted with customized treatments as some forms of cancer are. In this project, researchers will use machine learning to identify the molecular basis of ALS in its potentially multiple forms. They will leverage the molecular, clinical and behavioral data gathered by the Answer ALS consortium, to which they belong, to develop and apply novel machine learning methods to gain new insights into the disease.
Causal modeling to identify Lou Gehrig’s disease, or ALS, in its multiple forms
Guy Bresler (MIT); Karthikeyan Shanmugam, Dmitriy Katz-Rogozhnikov (IBM) Amyotrophic lateral sclerosis, or ALS, is a disease that attacks nerve cells in the brain and spinal cord, progressively limiting patients’ ability to speak, move and breathe. In collaboration with Ernest Fraenkel’s lab at MIT, researchers will conduct experiments on the motor neurons of ALS patients and develop algorithms to draw causal inferences about the data. Through causal modeling they hope to show that ALS is several distinct diseases that are best targeted with equally distinct treatments.
Shrinking the environmental footprint of concrete
Stefanie Jegelka, Elsa Olivetti (MIT); Richard Goodwin, Nghia Hoang (IBM) Concrete is the most common building material on Earth, with current production methods contributing a major share of global carbon emissions. In this project, researchers will leverage AI to find more sustainable, high-performing and cost-competitive concrete mixtures. They will use text analysis tools to explore the scientific literature for concrete mixture designs and use generative modeling to test different approaches to achieve greater environmental performance. They will also investigate use of waste materials in alternative concrete binders to further reduce concrete’s carbon footprint.
Visualizing an AI model’s blind spots
Antonio Torralba, David Bau (MIT); Hendrik Strobelt (IBM) Generative adversarial networks, or GANs, have a knack for picking out patterns in a series of images. But they also omit important details. In reconstructing a scene from photos it has studied, a GAN may systematically leave out people, cars and signs, even when those items feature prominently in the training data. In this project, researchers will decompose AI-generated images to understand what the model does and doesn't know about the physical world. The work is part of a larger effort to make AI systems more reliable and explainable.
Distributed AI-based planning for tracking objects in a multi-agent system
Moe Win (MIT); Subhro Das (IBM) The ability to manage a team of robot agents, like a fleet of drones or self-driving cars, depends on efficient policy-learning methods that train the agents to cooperate and coordinate their actions. In this project, researchers will explore the use of reinforcement learning algorithms to control multiple agents in a network. To achieve the right combination of policies, the researchers will investigate policy-learning methods that reach a joint value function by consensus rather than by sharing local observations among agents.
Fusing models with optimal transport to build more versatile AI systems
Justin Solomon (MIT); Mikhail Yurochkin (IBM) Modern AI models excel at specialized tasks, but fail when asked to learn too much, especially when different types of data are thrown at them. In this project, researchers will use optimal transport, a set of principles borrowed from geometry and probability theory, to fuse together a variety of specialized models to build an AI system with greater versatility. The work has applications in privacy-preserving federated learning, transfer learning, multi-modality inference and natural language processing (including the topic modeling example at left).
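Optimal transport's basic primitive is moving probability mass between two distributions at minimal cost. The sketch below computes an entropy-regularized transport plan with the standard Sinkhorn iterations; it is a generic illustration of that primitive (with assumed toy distributions and regularization), not the researchers' fusion method:

```python
import numpy as np

def sinkhorn(a, b, cost, reg=1.0, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn's algorithm.
    Returns a transport plan whose rows sum to `a` and columns to `b`."""
    K = np.exp(-cost / reg)              # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # alternately rescale the plan
        u = a / (K @ v)                  # to match each marginal
    return u[:, None] * K * v[None, :]

# Two small distributions over three points on a line.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
x = np.array([0.0, 1.0, 2.0])
cost = (x[:, None] - x[None, :]) ** 2    # squared-distance transport cost

plan = sinkhorn(a, b, cost)              # plan[i, j]: mass moved from i to j
```

In model fusion, a plan like this can align the neurons or parameters of one model with those of another before combining them.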
Learning and planning in hybrid domains
Leslie Pack Kaelbling (MIT); Michael Katz (IBM) Policies and value functions are one way to train an intelligent agent, but such methods are data intensive, bad at explaining their decisions and prone to failure in dynamic, complex environments. In this project, researchers will develop efficient, model-based planning methods for domains that require both discrete choices of what actions to take and numerical choices of precisely how to carry them out. Their proposed methods will need fewer data, generalize across domains, acquire knowledge cumulatively and provide explanations for their decisions.
Toward nimble AI devices
Jeehwan Kim (MIT); Seyoung Kim (IBM) Brain-inspired algorithms have helped to drive AI’s stunning progress. Researchers hope to push the field even further by building next-generation hardware that similarly mimics how the brain transfers information. In this project, researchers will explore new chip designs based on memristors, or memory resistors, that can transmit and store data locally. Their goal is to give mobile devices supercomputing speeds via memristor-based neural networks. (Image: Peng Lin)
Addressing AI’s trust problem
Ilaria Liccardi (MIT) AI systems are embedded in daily life, but are increasingly viewed as untrustworthy. As governments move to regulate data collection, analysis and use, attention has shifted to measures for restoring public confidence in AI. This project aims to understand the mechanisms and data that people need to trust AI-generated predictions. Researchers will identify the types of explanations that can identify biased training data and incorrect results. One goal is to provide guidelines to companies and policymakers for writing and implementing new AI regulations. (Image: MIT Sloan Management Review)
Toward interpretable, efficient deep learning models
Yury Polyanskiy (MIT); Brian Kingsbury (IBM) Deep neural networks excel at finding patterns in massive amounts of data but their decisions are notoriously opaque. In this project, researchers will trace the transformation of data from crude representations in a network’s shallow layers to the tightly clustered, interpretable representations in deeper layers. To give designers more control, researchers will build tools, rooted in information theory, for measuring the complexity of intermediate representations. Ultimately, they hope to eliminate the trial-and-error in designing efficient models.
Leveraging neuro-symbolic AI to find a malware needle in a haystack of code
Una-May O'Reilly (MIT) Continuing their earlier work on malware detectors, researchers will tackle so-called code malware, which comes disguised as instructions, command lines and scripts unknowingly executed by users. Code malware is especially pernicious because of the evasion techniques and other tactics attackers use to cover their tracks. Here, researchers will combine deep learning algorithms and symbolic programs to query a code base and identify suspicious lines of code. In addition to cybersecurity applications, this work has relevance for automatic program analysis and data-driven software engineering.
An explainable, financial forecasting framework for investment research
Rahul Mazumder (MIT); Pin-Yu Chen, Yang Zhang, Yada Zhu (IBM) Traditional financial tools struggle to make sense of the diverse data that now bombard us. To harness this information, next-generation AI techniques are needed to enable reliable forecasting of time-varying financial indicators. This project explores the use of external knowledge graphs describing supply-chain links, investment deals, corporate decisions and peer-to-peer networks to create an AI framework that is explainable, scalable and robust.
Combining statistical and symbolic AI for a more human-like understanding of language
Roger Levy (MIT); Ramon Astudillo (IBM) Under the surface of speech and text lies a rich symbolic structure that helps us understand what we hear and read. Statistical deep learning models have led to remarkable progress on a range of prediction problems, but they require massive datasets, are susceptible to chance associations, and have difficulty capturing the compositionality of human language. In this project, researchers will develop and test hybrid models that blend the strengths of the two approaches. The symbolic base provides a strong inductive bias and interpretable structural representations; neural networks are used for flexible pattern recognition over these structures to produce more reliable results.
Advancing graph deep learning to analyze relationships at scale
Charles E. Leiserson (MIT); Jie Chen, Toyotaro Suzumura (IBM) The powerful recommendation algorithms guiding us to new products, friends and webpages are based on graph algorithms that analyze complex relationships among millions of data points. Combined with deep neural networks, which excel at picking out patterns in images and sequence data, graph algorithms have the potential to predict market behavior and much more. In this project, researchers will address the computational challenges of graph neural networks while making their decisions more explainable.
Keeping humans in the loop of automated machine learning
David Karger (MIT); Dakuo Wang (IBM) Computer-designed algorithms have given non-specialists greater access to time-saving tools, but keeping a human in the loop remains critical as automated machine learning methods, or AutoML, evolve. In this project, researchers will examine how to add new capabilities to the AutoML pipeline while ensuring that users understand how to use and adapt tools to the task at hand. The work will include an end-to-end prototype system to evaluate algorithms across various domains.
Making forecasts from limited, real-time data
Munther Dahleh, Mardavij Roozbehani (MIT); Mark Squillante, Jakub Marecek (IBM) The dream of a smart city, with mobile ‘edge’ devices communicating in real time, finally appears within reach. The two-way flow of information promises new levels of efficiency for industry, transportation, government and healthcare. But technical challenges remain. Gathering, processing and storing small-scale data in multiple places at once introduces levels of complexity that compound errors. In this project, researchers will build on theories of dynamical systems approximation, optimization, and statistical learning to adapt AI models to this new world of connected devices.
A better way to bore tunnels beneath the big city
Herbert Einstein (MIT); Chandra Reddy (IBM) Diverting trains and roads beneath big cities is one way of moving people around more quickly. But boring through the sub-surface is both complicated and costly. AI promises to simplify the process by helping engineers combine information from lower-resolution geological surveys with more detailed data collected directly from machines as they bore their way underground. In this project, researchers will use boring data from the construction of Portugal’s Porto Metro system to develop a construction-strategy decision model. Eventually, the model will combine geological data from a given site with boring data gathered during construction.
Faster, cheaper AI with ‘neuromorphic’ chips that work like neurons in the brain
Bilge Yildiz, Ju Li, Jesus del Alamo (MIT) AI software has made stunning progress with the popularization of deep neural networks. Hardware innovation has lagged by comparison, with most chips still processing electrical signals digitally, in the binary logic of 1s and 0s, and off and on switches. The brain, by contrast, processes signals in an analog fashion, in bursts of varying intensity, which uses far less energy. The ability to recreate this process in a brain-like chip depends on the ability to precisely control electrical signals. In this project, researchers are exploring the use of ion intercalation to achieve multiple resistance states in the chip’s channel material, tungsten oxide (at left), while using as little energy as the brain.
Learning to map words to images via instructional videos
James Glass (MIT); Dhiraj Joshi (IBM) YouTube’s massive trove of videos gives researchers a rich source of data for training AI models to recognize how objects and activities shown in videos correspond to their spoken word representations. In this project, researchers will use cooking shows and other instructional videos to train a deep learning model to associate objects and actions like “hot dog” and “cutting” with their raw speech representations. Applications for the work include indexing the mountains of audio and video surfacing online daily.
Developing optimization algorithms for better predictions
Ali Jadbabaie, Asu Ozdaglar (MIT); Subhro Das (IBM) In training a deep learning model, a set of observations or examples is provided alongside a set of outputs. Eventually, the model learns to map inputs to outputs, and to identify, say, the animal in the photo as a cat or dog without any prompting. Much of the underlying computation hinges on optimization, or minimizing or maximizing a given variable. ‘Adaptive’ optimization methods like gradient clipping, RMSProp and Adam have helped pave the way for high-performing deep learning models. In this project, researchers will explain, using theory and experimental data, why these algorithms and methods produce models with stronger predictions.
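As a concrete reference point, the Adam update can be written in a few lines. The sketch below (a toy quadratic problem, not part of the project) shows how running moment estimates of the gradient give each parameter its own effective learning rate:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v) set a per-parameter step size."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)            # correct the bias toward zero
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = ||w||^2 starting far from the optimum at the origin.
w = np.array([5.0, -3.0])
m = v = np.zeros_like(w)
for t in range(1, 1001):
    grad = 2 * w                         # gradient of ||w||^2
    w, m, v = adam_step(w, grad, m, v, t)
```

After a thousand steps the iterate sits close to the origin; explaining why such adaptive steps also yield strong predictions in deep networks is part of what this project investigates.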
Training AI models to learn with their ‘eyes’ and ‘ears’
Antonio Torralba (MIT); Chuang Gan (IBM) In learning how to recognize faces and voices, AI models train on photos and voice recordings, but rarely both. Yet together the data provide a fuller sense of the world around us. To teach machines to make use of this data and learn more like us, researchers are exploiting video, where images synchronized with sound offer a rich simulation of the real world. In a series of projects, researchers are developing AI systems that can separate multiple sound sources and associate them with individual pixels.
Toward an AI system that understands cause and effect
Joshua Tenenbaum, Antonio Torralba (MIT); Chuang Gan (IBM) Deep learning models have delivered remarkable advances in AI, from image classification to speech recognition. But most of their success comes down to basic pattern matching. To push the capabilities of AI systems, researchers have created a simulated physical world of colliding objects, paired with questions and answers that probe why things happen. The dataset, called CLEVRER, is aimed at testing AI models on their understanding of how objects relate to and influence one another. To beat the test, the researchers have developed a hybrid AI model that combines statistical deep learning with the interpretability of symbolic programs.
Predictive modeling for non-AI specialists
Tim Kraska (MIT); Horst Samulowitz (IBM) A small business like a coffee shop or a bookstore might benefit more from sales-predicting software than companies with large cash flows. But most AI software is currently inaccessible to users with limited expertise in data cleaning, picking models for specific tasks, and evaluating their results. Here, researchers propose an end-to-end solution that builds on their drag-and-drop Northstar system, and works for both general users and expert data scientists. (Image: MIT News)
Shrinking AI to run on the internet-of-things
Song Han (MIT) For AI applications to move to smartphones and other devices, deep learning models need to get smaller so that they perform fewer computations, use less energy, and process and store fewer data. To get there, researchers are using AI, or AutoML, to make better AI. In this project, researchers will develop automated techniques for building tiny, efficient models that work on a range of edge devices, and can efficiently analyze video, 3D point cloud data and language. Their broad goal is to reduce the computation and engineering costs of AI products.
Finding better ways to treat Covid-19 patients on ventilators
Li-Wei Lehman, Roger Mark (MIT); Zach Shahn, Daby Sow (IBM) Troubled breathing from acute respiratory distress syndrome is one complication from Covid-19 that sends patients to the ICU. There, life-saving machines help patients breathe by mechanically pumping oxygen into the lungs. But even as towns and cities lower new infections through social distancing, there remains a national shortage of mechanical ventilators and serious health risks of ventilation itself. In this project, researchers will develop an AI tool to help doctors find better ventilator settings for Covid-19 patients and decide how long to keep them on a machine. Shortened ventilator use can limit lung damage while freeing up machines for others.
Allocating Resources in the Face of Uncertainty
David Simchi-Levi (MIT); Mark Squillante (IBM) One mark of a successful business is its ability to predict customer demand for products and services. But too often, customer preferences and request patterns are unknown or only partly known. Some can be learned, but others are highly uncertain. In this project, researchers aim to solve foundational problems in online dynamic resource allocation, in part by developing new algorithms that combine learning and risk-hedging in the face of uncertainty.
Toward computer vision models that recognize real-world objects
Boris Katz, Andrei Barbu (MIT); Dan Gutfreund (IBM) Object detectors that perform well on popular benchmarks too often fail in the real world. To kickstart the next revolution in object recognition, researchers have launched ObjectNet, a dataset of photos that controls for bias by capturing objects at odd angles, in unusual positions, and with unexpected backgrounds. Researchers will test how well humans do at ObjectNet to understand what new features to build into object-detection models, and how to fairly evaluate their performance.
An AI visualization tool that anyone can use
Arvind Satyanarayan (MIT); Hendrik Strobelt (IBM) The emerging field of explainable AI is creating badly needed tools to peer inside black-box models to understand how they make their decisions. But many of these visualization tools are inaccessible to the average user. The goal of this project is to build an interpretability interface that lets users intuitively explore how an AI model works. The design rests on two key principles: users can manipulate items in their dataset to understand the model’s learned representations and access the model’s hidden layers to make changes.
Reusing past knowledge in reinforcement learning
Pulkit Agrawal (MIT); Matthew Riemer, Tim Klinger (IBM) Through a style of trial-and-error learning called reinforcement learning, computers can now beat humans at chess, Go and a growing list of video games. So why hasn’t reinforcement learning caught on in health care, weather and climate prediction, and other areas with potential for high impact? The truth is that reinforcement learning systems require hundreds of millions of interactions to become expert at one task, learning to solve each task from scratch rather than reusing past knowledge. The goal of this project is to leverage previous experience to solve new and more complex tasks by developing a framework that lets agents reuse prior information, and by writing new algorithms that can transfer past knowledge without forgetting how to perform earlier tasks.
Developing policy-aware explanations for AI
Hal Abelson, David Edelman, Gerald Sussman, Daniel Weitzner (MIT); Michael Hind, Kush Varshney, Ian Molloy, JR Rao (IBM) Autonomous systems and the data they depend on to learn are increasingly putting laws and policies to the test. For AI to progress and achieve its promise, a consensus is emerging that AI systems will need to become more open and accountable to regulators and the public to ensure that the systems are fair, accurate, and unable to be tricked or misled. In dialogue with global policymakers, MIT-IBM researchers are developing technology guarantees in areas like consumer finance and transportation where AI has the potential to do the greatest good.
Developing cryptographic tools to keep data private and secure
Vinod Vaikuntanathan, Shafi Goldwasser (MIT); Fabrice Benhamouda, Tal Rabin (IBM) Machine learning and cryptography are flip sides of the same coin: one turns unstructured data into algorithms while the other hides the structure within data and algorithms. MIT-IBM researchers are exploiting these complementary traits to develop stronger cryptographic tools to keep sensitive data secure, as the health care, finance, and insurance industries, among others, handle more personal data. The researchers' goal is to build privacy protections into machine learning algorithms and make them less vulnerable to adversarial attacks.
Identifying patients at high risk for cardiovascular death
Collin Stultz (MIT); Kenney Ng (IBM) More cardiac patients could be saved each year if doctors could catch high-risk patients earlier and give them more aggressive treatment. Using machine learning tools to analyze patient medical records, MIT-IBM researchers have discovered 11 new features that indicate patients face a higher risk of cardiovascular death, from treatments received at the hospital to whether they're taking the blood-thinner warfarin. When the 11 features are considered with the patient’s age, systolic blood pressure and other standard metrics, the ability to predict high-risk patients goes up significantly, researchers say.
Can deep learning models be trusted?
Luca Daniel (MIT); Pin-Yu Chen (IBM) As AI systems automate more tasks, the need to quantify their vulnerability and alert the public to possible failures has taken on new urgency, especially in safety-critical applications like self-driving cars and fairness-critical applications like hiring and lending. To address the problem, MIT-IBM researchers are developing a method that reports how much each individual input can be altered before the neural network makes a mistake, whether on its own or through a malicious attack. The team is now expanding the framework to larger and more general neural networks, and developing tools to quantify their level of vulnerability based on many different ways of measuring input alteration.
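In the simplest case, a linear classifier, the "how much can an input be altered" question has an exact answer: the distance from the input to the decision boundary. The toy sketch below illustrates that special case; the researchers' framework targets deep networks, where no such closed form exists and the radius must be bounded or estimated:

```python
import numpy as np

def robustness_radius(w, b, x):
    """Minimum L2 perturbation that flips a linear classifier
    sign(w.x + b): the point's distance to the decision boundary."""
    return abs(w @ x + b) / np.linalg.norm(w)

w = np.array([3.0, 4.0])          # toy decision boundary: 3x + 4y - 5 = 0
b = -5.0
x = np.array([2.0, 1.0])          # classified positive: 3*2 + 4*1 - 5 = 5

radius = robustness_radius(w, b, x)

# Moving just past the radius, straight toward the boundary, flips the label;
# any smaller perturbation cannot.
x_adv = x - (radius + 1e-6) * w / np.linalg.norm(w)
flipped = np.sign(w @ x_adv + b) != np.sign(w @ x + b)
```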
Debugging neural networks
Antonio Torralba and Aude Oliva (MIT) Deep learning systems are responsible for many of the recent breakthroughs in artificial intelligence, but for progress to continue they will need to do a better job of explaining themselves. MIT-IBM researchers are developing visualization tools to do just that, allowing software developers to find and fix mistakes and ward off malicious attacks. The tools will allow developers to root out bugs in neural network nodes much as they do now in lines of code.
For example, if the network confuses a construction scene with a street bazaar, the tools pinpoint the set of nodes that produced the mistake. In this case, the network incorrectly interpreted the street as a sidewalk, and the construction site as a sales booth. The mistakes would be fixed by retraining these particular network nodes.
Preventing food spoilage to feed more people
Markus Buehler and Benedetto Marelli (MIT) Spoiled fruits and vegetables make up a large share of the food that goes to waste globally. What if some of it could be saved? MIT-IBM researchers are experimenting with AI to extend the life of perishable food by designing new structural biopolymers to serve as edible fruit and vegetable coatings. They are using machine learning tools to analyze the amino acid sequences that make a biopolymer edible, nontoxic and stable. They will then model the shape of their predicted biopolymers to see how their properties change. The researchers will synthesize the best biopolymer candidates in a lab to validate their predictions.
Fighting the opioid epidemic
David Sontag (MIT); Dennis Wei and Kush Varshney (IBM) More than 115 people in the United States die each day after overdosing on opioids. The type of opioid, how much was prescribed, and for how long, are all factors in who succumbs to addiction. That has led to a focus among public health officials to develop tools that can improve how pain-killers are prescribed. MIT-IBM Watson AI Lab researchers are applying machine learning tools to medical insurance-claim records to understand what kinds of medical histories and prescription practices raise red flags. Their goal is to develop a model that can help doctors tailor prescriptions to individual patients to minimize addiction risk.
A human-in-the loop system for automated moral reasoning
Iyad Rahwan (MIT); Francesca Rossi (IBM) To test how ordinary people think about the ethical dilemmas raised by AI and self-driving cars, MIT researchers developed the Moral Machine platform, which allowed volunteers to pick a preferred outcome in various life-threatening scenarios. The researchers found that regional variations played a major role in how people responded. In collaboration with IBM, the MIT researchers are now building models with their experimental data to understand how people and machines can communicate and reach consensus in morally charged situations. The research is an attempt to bring a computational approach to the ethical questions raised by AI.