Rachel A. Ryskin

ryskin@mit.edu

Curriculum Vitae

Research

How do humans communicate so effortlessly despite the imperfect nature of language input (e.g., due to speech errors, ambiguities) and the complexity of inferences involved in decoding its meaning (e.g., the speaker’s knowledge state)?

I study how individuals achieve impressively efficient language processing in the face of ambiguity, variability, and noise. I use eye-tracking and EEG to examine how people use various sources of information (visuo-spatial perspective, theory of mind, language statistics, etc.) to constrain the real-time interpretation of spoken language, as well as the learning and memory processes that underpin these representations.

In particular, I am examining the noisy-channel inferences that allow listeners to recover the intended meaning of a sentence from input that has been corrupted by noise (e.g., text containing typos), and how those mechanisms may differ in persons with aphasia.

In a new line of work, I am using fMRI to understand the interplay between the neural systems that support language and executive function, and the re-organization that follows damage to the language system (i.e., in aphasia subsequent to left-hemisphere stroke).