Past Events
Quest | CBMM Seminar Series - Giorgio Metta
Date: March 26, 2024 | 4pm EST
Location: Singleton Auditorium, Building 46
The iCub is a humanoid robot designed to support research in embodied AI. At 104 cm tall, the iCub is the size of a five-year-old child, and can crawl on all fours, walk, and sit up. Its hands support sophisticated manipulation skills. The iCub is distributed as Open Source following the GPL licenses. More than 50 robots have been built so far, which are available in laboratories across Europe, the US, Korea, Singapore, and Japan.
Mission Update - Language
Date: March 19, 2024 | 4pm EST
Location: Quest Conference Room, 45-792
Large language models are fundamental building blocks in many modern AI systems—for language processing, as well as robotics, computer vision, software engineering, and more. For models trained on text to be useful for general AI and scientific applications, they must understand not just the structure of language, but the structure of the world; moreover, their language, reasoning, and world knowledge capabilities must align with those in humans.
Quest | CBMM Seminar Series - Tom Griffiths
Date: March 12, 2024 | 4pm EST
Location: Singleton Auditorium, Building 46
Tom Griffiths develops mathematical models of higher-level cognition to understand the formal principles underlying our ability to solve everyday computational problems. His current focus on inductive problems — probabilistic reasoning, learning causal relationships, acquiring and using language, and inferring the structure of categories — is addressed by comparing human behavior to optimal computational solutions.
Navigating perceptual space with neural perturbations
Date: Tuesday, Feb. 27, 3:00 p.m. (note time change)
Location: 46-5165 (MIBR Reading Room)
Special Research Talk, Arash Afraz, Ph.D. Dr. Afraz received his MD from Tehran University of Medical Sciences in 2003 and his PhD in Psychology from Harvard University in 2009. He joined NIMH at NIH as a principal investigator in 2017 to lead the unit on Neurons, Circuits and Behavior.
Quest | CBMM Seminar Series - Alexander Borst
Date: February 14, 2024 | 2pm EST
Location: Singleton Auditorium, Building 46
Detecting the direction of image motion is important for visual navigation, predator avoidance, and prey capture, and is thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it must instead be computed by subsequent neural circuits.
Quest | CBMM Seminar Series - Yael Niv
Date: February 6, 2024 | 4pm EST
Location: Singleton Auditorium, Building 46
The Niv lab focuses on the neural and computational processes underlying reinforcement learning and decision-making, studying the ongoing day-to-day processes by which animals and humans learn from trial and error. Of particular interest is how attention and memory processes interact with reinforcement learning.
Quest | CBMM Seminar Series - Daniel Wolpert
Date: December 5, 2023 | 4pm EST
Location: Singleton Auditorium, Building 46
Humans spend a lifetime learning, storing, and refining a repertoire of motor memories appropriate for the multitude of tasks we perform. However, it is unknown what principle underlies the way our continuous stream of sensorimotor experience is segmented into separate memories, and how we adapt and use this growing repertoire. I will review our recent work on how humans learn to make skilled movements, focusing on how statistical learning can lead to multimodal object representations, how we represent the dynamics of objects, the role of context in the expression, updating, and creation of motor memories, and how families of objects are learned.
Quest | CBMM Seminar Series - Dylan Hadfield-Menell
Date: December 4, 2023 | 4pm EST
Location: Singleton Auditorium, Building 46
For AI systems to be safe and effective, they need to be aligned with the goals and values of users, designers, and society. In this talk, I will discuss the challenges of AI alignment and go over research directions to develop safe AI systems. I'll begin with theoretical results that motivate the alignment problem broadly. In particular, I will show how optimizing incomplete goal specifications reliably causes systems to select unhelpful or harmful actions. Next, I will discuss mitigation measures that counteract this failure mode. I will focus on approaches for incorporating human feedback into objectives, interpreting and understanding learned policies, and maintaining uncertainty about intended goals.