Incomplete Objectives and AI Safety: The Theory and Practice of AI Alignment

Speaker: Dylan Hadfield-Menell
Date: December 4, 2023 | 4pm EST
Location: Singleton Auditorium, Building 46

Abstract: For AI systems to be safe and effective, they must be aligned with the goals and values of users, designers, and society. In this talk, I will discuss the challenges of AI alignment and survey research directions for developing safe AI systems. I will begin with theoretical results that motivate the alignment problem broadly; in particular, I will show how optimizing incomplete goal specifications reliably causes systems to select unhelpful or harmful actions. Next, I will discuss mitigation measures that counteract this failure mode, focusing on approaches for incorporating human feedback into objectives, interpreting and understanding learned policies, and maintaining uncertainty about intended goals.
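The failure mode described above can be illustrated with a toy example (my own sketch, not the speaker's formal model): when an objective omits an attribute the designer actually cares about, an optimizer is free to sacrifice that attribute entirely.

```python
# Toy illustration of optimizing an incomplete objective.
# Hypothetical setup: each action sets three attributes (speed, accuracy,
# safety); the designer values all three, but the proxy objective given
# to the optimizer mentions only the first two.
import itertools

actions = list(itertools.product([0, 1, 2], repeat=3))

def true_utility(action):
    speed, accuracy, safety = action
    return speed + accuracy + 2 * safety  # safety matters most to the designer

def proxy_utility(action):
    speed, accuracy, _ = action  # safety was left out of the specification
    return speed + accuracy

# The optimizer maximizes the proxy, so it is indifferent to safety;
# among proxy-optimal actions, nothing stops it from landing on the
# least safe one.
best_for_proxy = max(actions, key=proxy_utility)
best_overall = max(actions, key=true_utility)

print(best_for_proxy)   # an action with maximal speed/accuracy but safety 0
print(best_overall)     # the action the designer actually wanted
print(true_utility(best_for_proxy) < true_utility(best_overall))  # True
```

The gap between `best_for_proxy` and `best_overall` is the cost of the incomplete specification: the proxy optimum is genuinely worse by the designer's own lights, even though the optimizer did its job perfectly.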