Quest | CBMM Seminar Series - Leyla Isik

Date: February 7, 2023 | 4pm EST
Location: Singleton Auditorium, Building 46

Leyla Isik is the Clare Boothe Luce Assistant Professor in the Department of Cognitive Science at Johns Hopkins University. Her research investigates how humans extract complex social information from the world, using a combination of human neuroimaging, intracranial recordings, machine learning, and behavioral techniques. Before joining Johns Hopkins, Isik was a postdoctoral researcher in the Center for Brains, Minds, and Machines at MIT and Harvard, working with Nancy Kanwisher and Gabriel Kreiman. She completed her PhD at MIT, advised by Tomaso Poggio.



The neural computations underlying real-world social interaction perception 

Abstract: Humans perceive the world in rich social detail. We effortlessly recognize not only objects and people in our environment, but also social interactions between people. The ability to perceive and understand social interactions is critical for functioning in our social world. We recently identified a brain region in the posterior superior temporal sulcus (pSTS) that selectively represents others’ social interactions across two diverse sets of controlled, animated videos. However, it is unclear how social interactions are processed in the real world, where they co-vary with many other sensory and social features.

In the first part of my talk, I will discuss new work using naturalistic fMRI movie paradigms and novel machine learning analyses to understand how humans process social interactions in real-world settings. We find that social interactions guide behavioral judgments and are selectively processed in the pSTS, even after controlling for the effects of other perceptual and social information, including faces, voices, and theory of mind.

In the second part of my talk, I will discuss the computational implications of social interaction selectivity and present a novel graph neural network model, SocialGNN, that instantiates these insights. SocialGNN reproduces human social interaction judgments in both controlled and natural videos using only visual information, but requires relational, graph structure and processing to do so. Together, this work suggests that social interaction recognition is a core human ability that relies on specialized, structured visual representations.
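To give a flavor of the "relational, graph structure and processing" the abstract refers to, below is a minimal, illustrative sketch of one round of graph message passing over a social scene graph whose nodes are people in a video frame. This is not the authors' SocialGNN code; all names, features, and the mean-pooling update rule are hypothetical simplifications chosen only to show why an interaction edge changes what each node "knows."

```python
# Minimal sketch (not the SocialGNN implementation): one round of message
# passing over a hypothetical "social scene graph" whose nodes are people.

def message_passing_step(node_feats, edges):
    """Update each node by averaging its own features with its neighbors'.

    node_feats: dict mapping node id -> list of floats (e.g. pose, position)
    edges: list of (src, dst) pairs, treated as undirected interaction links
    """
    # Build adjacency: each node's set of neighbors.
    neighbors = {n: set() for n in node_feats}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)

    updated = {}
    for n, feats in node_feats.items():
        # Pool the node's own features with its neighbors' features.
        pooled = [feats] + [node_feats[m] for m in sorted(neighbors[n])]
        # Element-wise mean: a node's new state depends on its relations.
        updated[n] = [sum(col) / len(pooled) for col in zip(*pooled)]
    return updated

# Two people linked by an interaction edge, plus an unconnected bystander:
feats = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [5.0, 5.0]}
edges = [("A", "B")]
out = message_passing_step(feats, edges)
# A and B exchange information through the edge; C pools only with itself.
```

The key design point this toy example illustrates is that the same node features produce different representations depending on the interaction edges, which is what distinguishes relational graph processing from per-person (node-independent) feature extraction.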