This is a decidedly academic website. My weirder and more inscrutable site, which includes fiction, sound design, and clocks, is here.
I am a sixth-year PhD student in the Brain and Cognitive Sciences department at MIT. I'm advised by Josh Tenenbaum in the Computational Cognitive Science group. I also work closely with Jacob Andreas and the LINGO lab. I earned my B.S. and M.S. in computer science at Stanford.
I am deeply interested in the relationship between language and general cognition in human minds. How can we express such a broad swath of our thinking in language, from our quotidian intuitions about people and the physical world to our goals, plans, and abstract theories about how the world works? How, in turn, does language drive our downstream thought — how do we update our beliefs from what we are told, or draw on broader cognition to reason about and answer questions? And how do we construct new knowledge in language, from new concepts to entirely new sciences or theories?
My research seeks to answer these questions using empirical human evidence and formal computational tools, including probabilistic models of cognition, program synthesis, and statistical language models. I dream of computational models that coherently integrate evidence from how kids learn language and how adults use it, as well as disparate theories from linguistics, cognitive science, and AI about how we think and make meaning from language. I’m also a writer. I love a heady and intimate sentence, and would like to build models that explain even a sliver of what we get out of ones as rich and unruly as these.
I use they/them pronouns and previously published professionally as Catherine Wong 💯. Publications that haven't yet been updated appear on Google Scholar under that name.
zyzzyva@mit.edu  /  Google Scholar  /  Github  /  weird
These publications are most representative of my current research thrusts.
Google Scholar contains the most comprehensive and accurate list of publications.
How does language inform downstream thought?
- From word models to world models: translating from natural language to the probabilistic language of thought. [preprint, 2023] This paper best describes the theoretical and computational basis for much of my current work.
- Intuitive physical reasoning and language. [CogSci, 2023]
- Intuitive social reasoning, inverse planning, and language. [ICML Theory of Mind workshop, 2023]
- Planning, causal explanation, and out-of-distribution language. [CogSci, 2022]
How do we learn new concepts, including from language?
- Identifying concept libraries from language about object structure. [CogSci, 2023]
- Leveraging language to learn program abstractions and search heuristics. [ICML, 2021]
- DreamCoder library learning system. [PLDI, 2021] [arXiv extended version]
- Stitch library learning system. [POPL, 2023]
How do we communicate and draw pragmatic inferences from language?
- Language-complete ARC: communicating natural programs. [NeurIPS, 2022]
- Amortizing common pragmatic inferences about vague words. [CogSci, 2023]