Arvind Satyanarayan (MIT); Hendrik Strobelt (IBM) The emerging field of explainable AI is creating badly needed tools to peer inside black-box models and understand how they make their decisions. But many of these visualization tools are inaccessible to the average user. The goal of this project is to build an interpretability interface that lets users intuitively explore how an AI model works. The design rests on two key principles: letting users manipulate items in their dataset to understand the model’s learned representations, and giving them access to the model’s hidden layers to make changes.
Pulkit Agrawal (MIT); Matthew Riemer, Tim Klinger (IBM) Through a style of trial-and-error learning called reinforcement learning, computers can now beat humans at chess, Go and a growing list of video games. So why hasn’t reinforcement learning caught on in health care, weather and climate prediction, and other areas with potential for high impact? The truth is that reinforcement-learning systems require hundreds of millions of interactions to become expert at a single task, learning each task from scratch rather than reusing past knowledge. The goal of this project is to leverage previous experience to solve new and more complex tasks, both by developing a framework that lets agents reuse prior information and by writing new algorithms that can transfer past knowledge without forgetting how to perform earlier tasks.
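One simple way to make "reusing past knowledge without forgetting" concrete is rehearsal: keep a small store of transitions from earlier tasks and mix them into training batches for the new task, so old skills are not overwritten. The sketch below is a minimal, hypothetical illustration of that mechanism, not the Lab's actual framework or algorithms.

```python
import random

class MultiTaskReplayBuffer:
    """Toy rehearsal buffer: retains transitions from earlier tasks so an
    agent can revisit them while learning a new task (one simple way to
    reduce catastrophic forgetting)."""

    def __init__(self, capacity_per_task=1000):
        self.capacity = capacity_per_task
        self.buffers = {}  # task_id -> list of transitions

    def add(self, task_id, transition):
        buf = self.buffers.setdefault(task_id, [])
        if len(buf) >= self.capacity:
            buf.pop(random.randrange(len(buf)))  # evict a random old entry
        buf.append(transition)

    def sample_batch(self, current_task, batch_size=8, rehearsal_frac=0.5):
        """Mix current-task transitions with rehearsal from earlier tasks."""
        n_old = int(batch_size * rehearsal_frac)
        old = [t for tid, buf in self.buffers.items()
               if tid != current_task for t in buf]
        batch = random.sample(old, min(n_old, len(old)))
        cur = self.buffers.get(current_task, [])
        batch += random.sample(cur, min(batch_size - len(batch), len(cur)))
        return batch
```

A training loop would draw batches this way so gradient updates for the new task are interleaved with reminders of the old ones.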
A Cancer Vaccine that Kick-starts the Immune System
Robert Langer, Ameya Kirtane, Daniel Reker, Giovanni Traverso (MIT) Vaccines have all but eliminated viruses like smallpox, polio and rubella. Could cancer be next? MIT researchers are using machine learning to search through a database of 10 million nanoparticles to find particles capable of activating specific immune cells to fight skin cancer. The team will test promising nanoparticles on cells in the lab, use the data to refine their model, and if successful, extend their work to other types of cancer.
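The test-refine-repeat loop described above has the shape of active learning: score candidates with a model, send the most promising ones to the lab, and fold the results back in. The sketch below illustrates that loop with purely hypothetical names (`predict_activity`, `lab_test`); it is not the team's actual screening pipeline.

```python
def screen_candidates(candidates, predict_activity, lab_test, rounds=3, batch=5):
    """Toy active-learning loop over a candidate pool.

    predict_activity: model score for a candidate (higher = more promising)
    lab_test: ground-truth measurement for a candidate
    """
    tested = {}
    for _ in range(rounds):
        untested = [c for c in candidates if c not in tested]
        ranked = sorted(untested, key=predict_activity, reverse=True)
        for c in ranked[:batch]:
            tested[c] = lab_test(c)  # send top candidates to the lab
        # a real pipeline would retrain predict_activity on `tested` here
    return tested
```

Each round narrows a huge search space (here, standing in for 10 million nanoparticles) to the handful worth synthesizing.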
Environmental Tracking On-the-Fly
Hari Balakrishnan, Mohammad Alizadeh, Hamsa Balakrishnan, Kristin Bergmann, Samuel Madden, Nick Roy, Vinod Vaikuntanathan (MIT) The falling cost of consumer drones has allowed scientists to track erupting volcanoes, receding glaciers and other processes too difficult or dangerous to observe in person. To expand these efforts, MIT researchers are developing a platform to allow hundreds of drones to gather information at closer range than satellites or instruments on land and sea can capture. The platform would include incentives for owners to lend their drones to a mission, and tools to coordinate the fleet's work, interpret collected data and protect personal privacy rights. Applications include monitoring of air pollution, sea-level rise, wildfires, and bridge and building defects.
Developing Policy-Aware Explanations for AI
Hal Abelson, David Edelman, Gerald Sussman, Daniel Weitzner (MIT); Michael Hind, Kush Varshney, Ian Molloy, JR Rao (IBM) Autonomous systems, and the data they depend on to learn, are increasingly putting laws and policies to the test. A consensus is emerging that for AI to progress and achieve its promise, AI systems will need to become more open and accountable to regulators and the public, so the systems can be shown to be fair, accurate, and resistant to being tricked or misled. In dialogue with global policymakers, MIT-IBM researchers are developing technical guarantees in areas like consumer finance and transportation, where AI has the potential to do the greatest good.
A Model to Learn All the World’s Languages
Roger Levy, Regina Barzilay and David Pesetsky (MIT) To native English speakers, Swahili sounds completely different from Quechua, but research shows that most languages share common properties. That may explain why humans learn language so easily, acquiring new words and concepts from context, while deep learning models require mountains of training data. The need for so much data leaves voice recognition and translation software beyond reach for thousands of languages that are spoken globally, but are not yet in machine-readable form. Researchers are developing a machine-learning framework to reveal the biases that let children learn language so quickly, and to improve and extend language-learning models to the thousands of data-scarce languages around the world.
A Robot Coach that Listens and Responds
Rosalind Picard and Cynthia Breazeal (MIT) Depression and other mood disorders are still diagnosed and tracked with information that patients give to their doctors, complicating efforts to deliver more personalized therapy. In this pilot project, MIT researchers are developing an emotional-wellness coach that can provide a daily shot of individualized attention and support. Subjects recruited to the study will grade their coach's ability to provide timely and effective advice. The study will also examine whether robot coaches offer better emotional support than state-of-the-art mobile apps already in use.
Developing Cryptographic Tools to Keep Data Private and Secure
Vinod Vaikuntanathan, Shafi Goldwasser (MIT); Fabrice Benhamouda, Tal Rabin (IBM) Machine learning and cryptography are flip sides of the same coin: one turns unstructured data into algorithms while the other hides the structure within data and algorithms. MIT-IBM researchers are exploiting these complementary traits to develop stronger cryptographic tools to keep sensitive data secure, as the health care, finance, and insurance industries, among others, handle more personal data. The researchers' goal is to build privacy protections into machine learning algorithms and make them less vulnerable to adversarial attacks.
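One standard cryptographic building block for computing on data without revealing it is additive secret sharing, sketched below. This is a textbook primitive offered to illustrate the flavor of tool involved, not the specific protocols the researchers are developing.

```python
import random

MOD = 2**61 - 1  # a large prime modulus

def share(x, n=3):
    """Split x into n additive shares; any subset short of all n
    reveals nothing about x."""
    parts = [random.randrange(MOD) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % MOD)
    return parts

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % MOD

def add_shared(a_shares, b_shares):
    """Parties holding shares of two secrets can compute shares of
    their sum without ever seeing either secret."""
    return [(a + b) % MOD for a, b in zip(a_shares, b_shares)]
```

Protocols built from primitives like this let, say, hospitals jointly train a model on pooled records while each keeps its patients' data private.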
Taking the Guesswork out of Farming
Danielle Wood and Neil Gaikwad (MIT) Farming has always been a stressful, unpredictable occupation, but misguided government policies and increasing drought have devastated many family farms and caused suicide rates to skyrocket. In response, MIT researchers are developing an AI framework to make small-scale farming and agriculture markets more efficient. Initiated and led by graduate student Neil Gaikwad, the project will help farmers in the U.S. and India predict how much corn and cotton to plant, and when, and give them market information to make better decisions. The framework is also meant to help policymakers fairly allocate shared resources like water and electricity.
Understanding Real-World Actions as They Unfold
Aude Oliva (MIT) and Daniel Gutfreund (IBM) The brain has a remarkable ability to size up a scene and quickly understand what’s going on. MIT-IBM researchers are training machines to do something similar with a dataset of 1 million short video clips called Moments in Time. The models learn to recognize what’s happening in any particular frame, whether that’s pandas playing or robots dancing or a poodle jumping for joy. As AI systems learn to understand the gist of dynamic scenes, the hope is that this knowledge can be transferred to other domains.
Designing a Robot with Common Sense
Leslie Kaelbling, Tomas Lozano-Perez and Joshua Tenenbaum (MIT) A robot that can break down high-level tasks and run for weeks without getting stuck is still a long way from being built. But MIT researchers hope to crack the problem by applying what they know about computers and the human brain. They are currently building an experimental infrastructure that will allow computer simulators and eventually, real robots, to perceive and interact with the world around them, and ultimately achieve a semblance of common sense.
Identifying Patients at High Risk for Cardiovascular Death
Collin Stultz (MIT) and Kenney Ng (IBM) More cardiac patients could be saved each year if doctors could identify high-risk patients earlier and treat them more aggressively. Using machine learning tools to analyze patient medical records, MIT-IBM researchers have discovered 11 new features that indicate a patient faces a higher risk of cardiovascular death — from treatments received at the hospital to whether the patient is taking the blood thinner warfarin. When the 11 features are considered alongside the patient’s age, systolic blood pressure, and other standard metrics, the ability to predict high-risk patients improves significantly, researchers say.
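A common way to combine risk features like these into a single prediction is a logistic-regression-style score. The sketch below is illustrative only: the feature names and weights are invented for this example, and the study's actual model and coefficients are not described in this summary.

```python
import math

# Hypothetical feature weights for illustration; not the study's coefficients.
WEIGHTS = {
    "age": 0.04,
    "systolic_bp": -0.01,
    "on_warfarin": 0.6,
    "in_hospital_treatment": 0.3,
}
BIAS = -6.0

def risk_score(patient):
    """Logistic-regression-style risk estimate in (0, 1):
    sigmoid of a weighted sum of the patient's features."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))
```

In practice the weights would be fit to outcomes in historical records, and the score thresholded to flag patients for more aggressive treatment.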
Can Deep Learning Models Be Trusted?
Luca Daniel (MIT), Pin-Yu Chen (IBM) As AI systems automate more tasks, the need to quantify their vulnerability and alert the public to possible failures has taken on new urgency, especially in safety-critical applications like self-driving cars and fairness-critical applications like hiring and lending. To address the problem, MIT-IBM researchers are developing a method that reports how much each individual input can be altered before the neural network makes a mistake, whether the alteration arises naturally or through a malicious attack. The team is now expanding the framework to larger and more general neural networks, and developing tools to quantify their level of vulnerability under many different ways of measuring input alteration.
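The quantity being measured can be pictured as the smallest perturbation radius that flips a model's prediction. For a toy linear classifier it can be found by brute-force search, as in the hypothetical sketch below; real certification methods bound this radius analytically for deep networks rather than searching.

```python
def predict(weights, bias, x):
    """Toy linear classifier standing in for a neural network."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def min_flip_radius(weights, bias, x, step=0.01, max_eps=5.0):
    """Smallest L-infinity perturbation size that changes the prediction,
    found by searching along the worst-case direction for a linear model."""
    base = predict(weights, bias, x)
    sign = -1 if base == 1 else 1  # push the score toward the decision boundary
    eps = step
    while eps <= max_eps:
        x_adv = [xi + sign * eps * (1 if w > 0 else -1)
                 for xi, w in zip(x, weights)]
        if predict(weights, bias, x_adv) != base:
            return eps
        eps += step
    return None  # robust up to max_eps
```

A larger radius means the input sits safely far from the decision boundary; a tiny one flags a fragile, attackable prediction.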
Debugging Neural Networks
Antonio Torralba and Stefanie Jegelka (MIT); Hendrik Strobelt (IBM) Deep learning systems are responsible for many of the recent breakthroughs in artificial intelligence, but for progress to continue they will need to do a better job of explaining themselves. MIT-IBM researchers are developing visualization tools to do just that, allowing software developers to find and fix mistakes and ward off malicious attacks. The tools will allow developers to root out bugs in neural network nodes much as they do now in lines of code.
For example, if the network confuses a construction scene with a street bazaar, the tools pinpoint the set of nodes that produced the mistake. In this case, the network incorrectly interpreted the street as a sidewalk, and the construction site as a sales booth. The mistakes would be fixed by retraining these particular network nodes.
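A toy version of pinpointing the responsible nodes is to rank hidden units by how strongly they push the network toward the wrong answer. The sketch below uses an invented activation-times-weight attribution for a single layer; the actual visualization tools are far more sophisticated.

```python
def blame_units(activations, class_weights, wrong_class):
    """Rank hidden units by their contribution to the incorrectly
    predicted class: contribution = activation * weight into that
    class's output unit. Returns (unit_index, contribution) pairs,
    largest contribution first."""
    contribs = [(i, a * class_weights[wrong_class][i])
                for i, a in enumerate(activations)]
    return sorted(contribs, key=lambda t: t[1], reverse=True)
```

The top-ranked units are the candidates for targeted retraining, analogous to stepping through a stack trace to the offending line.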
Preventing Food Spoilage to Feed More People
Markus Buehler and Benedetto Marelli (MIT); Pin-Yu Chen and Lingfei Wu (IBM) Spoiled fruits and vegetables make up a large share of the food that goes to waste globally. What if some of it could be saved? MIT-IBM researchers are experimenting with AI to extend the life of perishable food by designing new structural biopolymers to serve as edible fruit and vegetable coatings. They are using machine learning tools to analyze the amino acid sequences that make a biopolymer edible, nontoxic and stable. They will then model the shape of their predicted biopolymers to see how their properties change. The researchers will synthesize the best biopolymer candidates in a lab to validate their predictions.
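A first step in this kind of sequence analysis is turning an amino-acid string into numeric features a model can consume. The sketch below uses a few real Kyte-Doolittle hydrophobicity values but is otherwise a minimal, invented featurization, not the team's actual descriptors.

```python
# A few Kyte-Doolittle hydrophobicity values (one-letter amino-acid codes).
HYDROPHOBICITY = {"A": 1.8, "G": -0.4, "S": -0.8, "P": -1.6, "L": 3.8}

def sequence_features(seq):
    """Turn an amino-acid sequence into simple numeric features:
    length, mean hydrophobicity, and per-residue composition fractions."""
    vals = [HYDROPHOBICITY.get(aa, 0.0) for aa in seq]
    comp = {aa: seq.count(aa) / len(seq) for aa in set(seq)}
    return {
        "length": len(seq),
        "mean_hydrophobicity": sum(vals) / len(vals),
        "composition": comp,
    }
```

Features like these would feed a model that predicts whether a candidate biopolymer is edible, nontoxic and stable, before anything is synthesized in the lab.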
Fighting the Opioid Epidemic
David Sontag (MIT), Dennis Wei and Kush Varshney (IBM) More than 115 people in the United States die each day after overdosing on opioids. The type of opioid, how much was prescribed, and for how long, are all factors in who succumbs to addiction. That has led public health officials to focus on developing tools that can improve how painkillers are prescribed. MIT-IBM Watson AI Lab researchers are applying machine learning tools to medical insurance-claim records to understand what kinds of medical histories and prescription practices raise red flags. Their goal is to develop a model that can help doctors tailor prescriptions to individual patients to minimize addiction risk.
A Human-in-the-Loop System for Automated Moral Reasoning
Iyad Rahwan (MIT), Francesca Rossi (IBM) To test how ordinary people think about the ethical dilemmas raised by AI and self-driving cars, MIT researchers developed a Moral Machine platform that allowed volunteers to pick a preferred outcome in various life-threatening scenarios. The researchers found that regional variations played a major role in how people responded. In collaboration with IBM, the MIT researchers are now building models with their experimental data to understand how people and machines can communicate and reach consensus in morally charged situations. The research is an attempt to bring a computational approach to the ethical questions raised by AI.
An App to Track Declining Brain Function
Thomas Heldt and Vivienne Sze (MIT) MIT researchers are developing low-cost tools to identify and track Alzheimer’s and other neurodegenerative diseases using a simple mobile-phone app. As patients play an eye-tracking game on their phone, the camera records how quickly and accurately their eyes respond to prompts on the screen. The resulting data can tell researchers how well the patient’s brain is functioning. The app, and the software being developed to crunch the data, could provide a way to track disease progression in patients with Alzheimer's. It could also be used as an adjunct to clinical drug trials by making it easier to track improvements over time.
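One plausible metric such an app could compute is saccade latency: the delay between a prompt appearing on screen and the gaze reaching the target. The helper below is a hypothetical sketch of that computation from timestamped gaze samples; the project's actual measures are not specified here.

```python
def saccade_latency(prompt_time, gaze_samples, target_x, tol=0.05):
    """Time from prompt onset until gaze first lands within `tol` of the
    target's horizontal position. gaze_samples: list of (timestamp, x)
    pairs from the phone camera's eye tracker. Returns None if the gaze
    never reaches the target."""
    for t, x in gaze_samples:
        if t >= prompt_time and abs(x - target_x) <= tol:
            return t - prompt_time
    return None
```

Tracking how this latency drifts over months of at-home play sessions is the kind of signal that could indicate disease progression.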