As of July 2013, Tyler Perrachione has moved to Boston University.

Visit one of these pages instead:
Tyler Perrachione at Boston University
Communication Neuroscience Research Laboratory @ BU

This page is being preserved to avoid broken links, but it is not being updated.

Tyler Perrachione

McGovern Institute for Brain Research

Massachusetts Institute of Technology


My research addresses the cognitive and systems neuroscience of human communication, including language processing, social auditory perception, and their development. My current research efforts focus principally on three questions:


How do people recognize one another by the sound of their voice?

Jump to academic papers or stories in the popular media about this topic.

Being able to distinguish, recognize, and identify individual voices is both an important social ability and a critical piece of efficient speech perception. Voices have been called the "auditory face" because they let us tell people apart when we hear them speak. However, voices are much more than just auditory faces, because voices are also the principal medium for language, as conveyed through speech. My research on voices has focused on how information about speech and information about talker identity interact during perception.

When you say you can speak a language, you mean you know many things about how that language works -- what words mean, how to recognize those words when they're spoken to you, and how to say those words to other people. Knowing words means knowing the sounds (phonemes) that those words are made of. For example, the word "cat" is made up of the sounds /k/, /æ/, /t/. Whenever different people say the word "cat", they say it just a little bit differently from one another -- differences that linguists call the "phonetics" of a word. Even though you may not be consciously aware of these differences when you hear people talking (unless you notice they have an accent, in which case the differences are very large), your brain has to "hear past" those differences to understand that the word they were saying was "cat". More than just hearing past those differences, your brain also learns the specific phonetic patterns of other people, which not only helps you understand the words they say, but is also a major part of how you know who someone is when you hear them talking.

My research explores how phonetic knowledge enhances human talker identification. For example, my colleagues and I have shown that you are better at identifying voices when they speak the same language you do, and worse at recognizing voices when you don't speak the language. Moreover, this difficulty identifying voices speaking a foreign language doesn't go away even if you practice those voices for a long time -- you need to actually know the words they're saying before you can reliably identify who was talking. Similarly, you will have more trouble recognizing voices with a strong accent, even if they are speaking the same language you do, because the phonetic properties of their speech are so different from what you're used to. We have also shown that it is these phonetic (dialectal) properties of speech that lead listeners to think a talker is a member of a particular racial group.

Academic papers:

In the popular press:


Do developmental disorders of language and reading result from aberrant central auditory function?

More information on this topic coming soon! (2011-07-30)


What are the behavioral and neurophysiological factors associated with auditory learning and auditory expertise?

More information on this topic coming soon! (2011-07-30)