Rachel A. Ryskin

[Google Scholar] [CV - May 2018]

Research

How do humans communicate so effortlessly despite the imperfect nature of language input (e.g., due to speech errors, ambiguities) and the complexity of inferences involved in decoding its meaning (e.g., the speaker’s knowledge state)?

I study how individuals achieve impressively efficient language processing in the face of ambiguity, variability, and noise. In particular, I use eye-tracking and, more recently, EEG to examine how people draw on various sources of information (visuo-spatial perspective, theory of mind, language statistics, etc.) to constrain the real-time interpretation of spoken language, as well as the learning and memory processes that underpin these information sources.

In a new line of work, I’m examining the noisy-channel inferences that allow listeners to decode the intended meaning of a sentence from input that’s been corrupted by noise (e.g., text containing typos) and how those mechanisms may differ in persons with aphasia.

Academic bio

I’m currently a postdoctoral fellow working with Ted Gibson in the Department of Brain and Cognitive Sciences at MIT and with Swathi Kiran in Speech, Language, and Hearing Science at Boston University. I received my PhD in Cognitive Psychology from the University of Illinois at Urbana-Champaign, where I worked primarily with Sarah Brown-Schmidt and Aaron Benjamin. Before grad school, I earned a B.A. in Cognitive Science from Northwestern University, where I studied memory for inaccurate information in text in David Rapp’s lab. I also worked on a project in Dedre Gentner’s lab about children’s learning of relational concepts and the role of verbal labels.