
Quest | CBMM Seminar Series - George Konidaris

Date: October 18, 2022 | 4:00 pm ET
Location: Singleton Auditorium, Building 46

George Konidaris is an Associate Professor of Computer Science and Director of the Intelligent Robot Lab at Brown, which forms part of bigAI (Brown Integrative, General AI). He is also the Chief Roboticist of Realtime Robotics, a startup based on his research on robot motion planning. Konidaris focuses on understanding how to design agents that learn abstraction hierarchies enabling fast, goal-oriented planning. He develops and applies techniques from machine learning, reinforcement learning, optimal control, and planning to construct well-grounded hierarchies that result in fast planning for common cases and are robust to uncertainty at every level of control.


Reintegrating AI: Skills, Symbols, and the Sensorimotor Dilemma

Abstract: AI is, at once, an immensely successful field---generating remarkable ongoing innovation that powers whole industries---and a complete failure. Despite more than 50 years of study, the field has never settled on a widely accepted, or even well-formulated, definition of its primary scientific goal: designing a general intelligence. Instead, it consists of siloed subfields studying isolated aspects of intelligence, each of which is important but none of which can reasonably claim to address the problem as a whole. But intelligence is not a collection of loosely related capabilities; AI is not about learning or planning, reasoning or vision, grasping or language---it is about all of these capabilities, and how they work together to generate complex behavior. We cannot hope to make progress towards answering the overarching scientific question without a sincere and sustained effort to reintegrate the field.

My talk will describe the current working hypothesis of the Brown Integrative, General-Purpose AI (bigAI) group, which takes the form of a decision-theoretic model that could plausibly generate the full range of intelligent behavior. Our approach is explicitly structuralist: we aim to understand how to structure an intelligent agent by reintegrating, rather than discarding, existing subfields into a single, intellectually coherent model. The model follows from the claim that general intelligence can only coherently be ascribed to a robot, not a computer, and that the resulting interaction with the world can be well-modeled as a decision process. Such a robot faces a sensorimotor dilemma: it must necessarily operate in a very rich sensorimotor space---one sufficient to support all the tasks it must solve, but therefore vastly overpowered for any single one. A core (but heretofore largely neglected) requirement for general intelligence is therefore the ability to autonomously formulate streamlined, task-specific representations of the kind that single-task agents are typically assumed to be given. Our model also cleanly incorporates existing techniques developed in robotics, viewing them as innate knowledge about the structure of the world and the robot, and modeling them as the first few layers of a hierarchy of decision processes. Finally, our model suggests that language should ground to decision-process formalisms, rather than to abstract knowledge bases, text, or video, because those formalisms best characterize the principal task facing both humans and robots.
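
As a minimal illustrative sketch (not drawn from the talk, and assuming the standard Markov decision process notation), the "decision process" formalism referred to above is usually written as a tuple, with a streamlined, task-specific representation obtained by mapping the rich sensorimotor space onto a much smaller abstract one:

    % Sensorimotor-level decision process: states, actions, transition model, rewards, discount factor
    M = \langle S, A, T, R, \gamma \rangle
    % Hypothetical task-specific abstraction: a state-abstraction map \phi induces a smaller process
    \phi : S \to \bar{S}, \qquad \bar{M} = \langle \bar{S}, \bar{A}, \bar{T}, \bar{R}, \gamma \rangle

Here the barred quantities and the map \phi are assumed notation for a task-specific abstraction of the kind the abstract argues a general agent must construct for itself; planning in the much smaller \bar{M}, rather than directly in the sensorimotor-level M, is what makes fast, goal-oriented planning tractable.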