Kanwisher Lab


Dana Boebinger (dlboebinger at gmail dot com)
I am a PhD student in the Harvard-MIT program in Speech and Hearing Bioscience and Technology, working with both Nancy Kanwisher and Josh McDermott. I use fMRI to examine the neural mechanisms that underlie human perception of complex sounds, such as speech and music. I am also interested in how perception of these complex sounds varies across people, as well as the extent to which experience shapes both auditory perceptual abilities and how sound is represented in the brain.
Apurva Ratan Murty (ratan at mit dot edu)
I am a postdoctoral research fellow with Nancy Kanwisher and Jim DiCarlo. I received my PhD in Neuroscience from the Centre for Neuroscience, Indian Institute of Science, Bangalore, India, where I studied viewpoint-invariant object representations in the macaque inferotemporal cortex. Broadly, I am interested in computational mechanisms of early brain development, which I investigate using a combination of brain imaging, macaque electrophysiology, behavioural psychophysics, and computational modelling.
David Beeler (dsbeeler at mit dot edu)
As a lab tech, I get to be involved in a lot of different projects, but I have focused on collecting and analyzing fMRI (functional magnetic resonance imaging) and DWI (diffusion-weighted imaging) data. To understand brain development and the progression of brain disorders, it is important to understand how the brain is organized and what types of computations it performs. My goal is to use available technologies and techniques to uncover this functional organization, and ultimately learn something about how we think, feel, and behave.
Heather Kosakowski (hlk at mit dot edu)
At birth, the human infant brain weighs less than one pound. As infants become children and then adults, that tiny piece of tissue grows to three times its birth size and houses our entire experience as human beings. Every bit of knowledge, every cognitive capacity, and every thought we have is a result of what is and isn't stored by the billions of neurons in our brain. I think that is amazing! As a graduate student, I get to study the development of the functional specialization and organization of the human brain with Nancy Kanwisher and Rebecca Saxe as my advisors!
Leyla Isik (lisik at mit dot edu)
The brain can effortlessly extract visual and social information (such as who you are looking at, what they are doing, and who they are interacting with) from complex visual scenes. As a postdoctoral researcher with Nancy Kanwisher and Gabriel Kreiman, I study how the brain solves this problem using neuroimaging, ECoG, and machine learning. I completed my PhD with Tomaso Poggio, where I studied the dynamics of invariant object and action recognition in the human brain. To find out more about my research, you can visit my website.
Michael Cohen (michaelthecohen at gmail dot com)
I am a postdoctoral fellow in the Kanwisher Lab. Before starting at MIT, I completed my PhD at Harvard University working with Ken Nakayama and George Alvarez. In graduate school, I used a combination of visual psychophysics and functional neuroimaging (fMRI) to characterize the relationship between perceptual capacity and the functional organization of the visual system. Broadly, I am interested in the limits of human perception and how different mechanisms (e.g., attention, neural organization) determine the contents of conscious experience. Going forward, I hope to extend my findings in visual perception to other modalities and domains: audition, long-term memory, cognitive development, and so forth. More information about me can be found on my website.
Nancy Kanwisher (ngk at mit dot edu)
Lucky me! I get to work with all the brilliant and wonderful people on this page, and to think about cool questions like these: How are objects, faces, and scenes represented in the brain, and (how) do the representations of each of these classes of stimuli differ from each other? How are visual representations affected by attention, awareness, and experience? Which mental processes get their own special patch of cortex, why is it these processes and (apparently) not others, and how do special-purpose bits of brain arise in the first place?
Charlie (Charlie at mit dot edu)

Rosa Lafer-Sousa (rlaferso at mit dot edu)
I am a first-year graduate student in BCS seeking to shed light on the overall architecture of the ventral visual pathway and to establish direct links between neural activity and perception. As an amateur visual artist, my interest in vision stems from its intersection with art: to what extent can the study (and practice) of art inform an understanding of the visual system, and vice versa? Most recently, in my pre-doctoral work in the lab of Dr. Bevil Conway, we discovered that color-selective cortical regions lie systematically adjacent to face-selective cortical regions in the ventral visual pathway in monkeys. Using fMRI in humans, my present work in the Kanwisher lab aims to 1) test whether similar functional organization is found in humans and 2) discover homologous regions in human and monkey brains that might allow us to extend the impact of knowledge obtained through invasive techniques in monkeys.
Matt Peterson (mfpeters at mit dot edu)
How does the brain learn to actively select visual information for the rapid and reliable recognition of the many socially relevant cues available from the human face? My graduate work under the supervision of Miguel Eckstein at UC Santa Barbara combined psychophysics, eye tracking, and computational modeling to understand why people consistently choose (individually) specific places to look on faces. In the Kanwisher lab, I hope to add neuroimaging to this set of techniques to investigate how eye movements interact with the neural representations of faces to achieve the impressive invariance with which the brain perceives facial identities. Additional work, in collaboration with Ken Nakayama at Harvard, will look to define and quantify the visual information that is available in dynamic social situations to guide social behavior, and the neural computations and representations responsible for these processes.
Caroline Robertson (carolinerobertson at fas dot harvard dot edu)
I'm interested in the marriage of sensory and cognitive signals in the human brain. Can we use vision to infer patterns of cognition? What neural architecture is common to both perception and thought? This intersection is particularly relevant to our understanding of mental conditions, such as autism and ADHD, in which different patterns of higher-order cognition are mirrored in the way individuals visually engage with the world. I am a Junior Fellow in the Harvard Society of Fellows. Please see my website for more details about my research.
Former Lab Members