Kanwisher Lab

The Kanwisher Lab investigates the functional organization of the human brain as a window into the architecture of the mind. Over the last 20 years, our lab has played a central role in identifying a number of cortical regions in humans that are engaged in particular components of perception and cognition. Many of these regions are very specifically engaged in a single mental function, such as the visual perception of faces, places, and bodies, or the auditory perception of speech and music. Others selectively process abstract, uniquely human functions like understanding the mental states of others or the meaning of a sentence. Each of these regions is present in approximately the same location in virtually every normal person. This new neural portrait of the human mind reveals a vast landscape of new questions, which we are now tackling, about the representations, computations, and origins of each region. Much of our current work exploits the spatiotemporal resolution of intracranial recordings from neurosurgery patients and the power of deep neural networks to test computationally precise hypotheses about what exactly the brain computes, and how and why it computes the way it does.

One major thread in our current work investigates the developmental origins of cortical specialization, including recent discoveries by grad student Heather Kosakowski (in collaboration with Saxelab) that the FFA, PPA, and EBA are present in 6-month-old infants, by postdoc Ratan Murty that a “tactile FFA” is present in congenitally blind participants, and by grad student Dana Boebinger (in collaboration with the McDermott lab) that music-selective neural populations are present in people with no explicit musical training.

Many ongoing lines of research in our lab now use artificial neural networks to understand computation in the brain. One recent paper with the DiCarlo lab shows that CNN-based models can very accurately predict the responses of the FFA, PPA, and EBA to novel stimuli, enabling high-throughput tests of the models that could never be conducted on actual brains. Another recent paper with the Fedorenko lab shows that natural language processing models account for significant variance in neural responses to language, and that the models accounting for the most variance are those that are best at predicting upcoming words. A third line of work with the Tenenbaum lab is testing computational models of physical scene understanding against neural responses in the brain.
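
To give a flavor of how such CNN-based encoding models work, the sketch below is purely illustrative: the choice of network, layer, voxel data, and regression method are placeholders rather than our published pipeline. It maps features from a pretrained CNN onto measured voxel responses with cross-validated ridge regression and evaluates prediction on held-out stimuli.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Pretrained CNN used as a fixed feature extractor (the network and layer are
# arbitrary choices for this sketch).
cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(cnn.children())[:-1])  # drop the classifier
feature_extractor.eval()

def extract_features(images: torch.Tensor) -> np.ndarray:
    """Return one feature vector per image from the penultimate layer."""
    with torch.no_grad():
        return feature_extractor(images).flatten(start_dim=1).numpy()

# Stand-in data: in a real experiment these would be the stimulus images shown in
# the scanner and the measured responses of face-, place-, or body-selective voxels.
images = torch.rand(64, 3, 224, 224)         # 64 stimuli
voxel_responses = np.random.randn(64, 50)    # 50 voxels (synthetic here)

X = extract_features(images)
X_train, X_test, y_train, y_test = train_test_split(X, voxel_responses, test_size=0.25)

# Linear encoding model: ridge regression from CNN features to voxel responses,
# with cross-validation over the regularization strength.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)
print("held-out prediction R^2:", encoder.score(X_test, y_test))
```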

A fourth application of artificial neural networks asks perhaps the most fundamental question: why does the brain exhibit functional specialization in the first place? Ongoing deep net modeling work with Katharina Dobs finds that CNNs simultaneously optimized for face and object recognition spontaneously segregate these tasks into distinct processing streams. These results suggest that for face recognition, and perhaps more broadly, the domain-specific organization of the cortex reflects a computational optimization, over development and/or evolution, for the loss function of life.
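
For readers curious what "simultaneously optimized for face and object recognition" looks like in practice, here is a minimal sketch with a deliberately tiny architecture trained on random stand-in data, not the published models: a single shared trunk feeds two task-specific readouts, and the network is trained on the sum of both losses. The question in this line of work is whether face-specific and object-specific processing then segregate within the shared layers.

```python
import torch
import torch.nn as nn

class DualTaskCNN(nn.Module):
    """One shared convolutional trunk with separate face and object readouts."""
    def __init__(self, n_face_ids: int = 100, n_object_classes: int = 100):
        super().__init__()
        # Shared trunk: both tasks must be solved with the same representation.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.face_head = nn.Linear(64, n_face_ids)            # face identity readout
        self.object_head = nn.Linear(64, n_object_classes)    # object category readout

    def forward(self, x):
        z = self.trunk(x)
        return self.face_head(z), self.object_head(z)

model = DualTaskCNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data; real experiments train on
# large face-identity and object-category datasets.
images = torch.rand(8, 3, 64, 64)
face_labels = torch.randint(0, 100, (8,))
object_labels = torch.randint(0, 100, (8,))

face_logits, object_logits = model(images)
loss = criterion(face_logits, face_labels) + criterion(object_logits, object_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```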

For more information on these topics, you can browse the short videos for newcomers to the field at NancysBrainTalks, see our recent scientific publications, or check out recent videos of talks and interviews on our site.


The Kanwisher Lab now has a Twitter account: @KanwisherLab