Past Events
What’s Next in AI 2020 Conference
Date: Nov. 5, Nov. 12, and Nov. 19, 2020 | 9am-12pm EST
Location: Webinar
Leaders agree that AI offers a competitive advantage, but only a fraction of organizations are using AI to its full potential. In this virtual event, scientists and business experts from the MIT-IBM Watson AI Lab will explain how to overcome three key barriers to implementing AI successfully: trust, scalability, and reasoning.
Collective Intelligence
Date: September 24, 2019 | 9am-6pm EST
Location: Singleton Auditorium, Building 46
Almost everything humans have achieved has been done by groups of people working together. Financial markets operate on this principle of collective intelligence to set prices for stocks, as do Internet search engines to answer questions asked by thousands before. Computers can make groups even smarter, but how should humans and machines interact? This workshop will explore the ways that people and machines, working separately and together, can leverage their relative strengths, resolve conflict, and create value for society.
GANocracy
Date: May 31, 2019 | 8:30am-7pm
Location: MIT Building 46 and Room 34-101
This workshop and tutorial will focus on the promise of generative adversarial networks, or GANs: how we can exploit their benefits while minimizing their potential harm. Topics will include the nuts and bolts of generative models, their applications, generative art, and the science and theory of GANs.
Intelligent Hardware Technologies
Date: May 7, 2019 | 9am-5pm
Location: Building 46 Singleton Auditorium (46-3002) and Atrium
Successful hardware innovation in AI will not take place in isolation, but will emerge from a rich, layered research ecosystem ranging from materials science to software engineering. This workshop will explore novel, long-term opportunities for AI hardware, and will feature faculty talks, a panel discussion, and a poster session.
Robust, Interpretable Deep Learning Systems
Date: November 20, 2018 | 2:30pm-6:30pm
Location: Building 46 Atrium and Singleton Auditorium
To advance further, deep learning systems will have to become more transparent. They will need to prove they are reliable, can withstand malicious attacks, and can explain their reasoning, especially in safety-critical applications like self-driving cars. The symposium will feature faculty talks, a poster session, and refreshments.