I am currently a Stanford Human-Centered Artificial Intelligence Postdoctoral Fellow working with the Cognitive Tools Lab. I also maintain affiliations with the Computational Cognitive Science and LINGO labs at MIT. I completed my PhD at MIT in Brain and Cognitive Sciences in 2025, advised by Josh Tenenbaum and Jacob Andreas. I received my B.S. and M.S. in computer science at Stanford, advised by Dan Jurafsky and Sebastian Thrun.
My research asks how people understand and learn from language. How do human minds represent and construct meaning from language: how do we usefully relate words and sentences to everything else that we know and believe? And how do people learn so much from language, including new concepts and theories that might dramatically change how we understand the world around us? I'm particularly interested in understanding how people manage to learn, understand, and usefully reason about language with so much less language experience and far fewer computational resources than any of our current best computational models.
My work seeks to answer these questions by combining theory-driven cognitive experiments with formal computational tools, including structured probabilistic models of cognition, program synthesis, and machine learning approaches. I look for approaches that can scale our theoretical and empirical picture of how we understand and learn from language, both by explaining how language relates to other domains of psychology (like intuitive physical or social cognition) and by unifying disparate formal approaches to modeling language (like those from linguistics, cognitive science, and AI). I'm also a writer. I love a heady and intimate sentence, and would like to build models that explain even a sliver of what we get out of ones as rich and unruly as these.
I use they/them pronouns. Earlier publications appear under the name Catherine Wong.
liowong@stanford.edu  /  Google Scholar  /  Github  /  writing and clocks  /  signal hill
Research Areas
How do we understand language with so little language experience?
- From words to worlds: bridging language and thought. [Thesis, 2024] [SPP, 2023] [Glushko Dissertation Talk @ CogSci] [Simons Talk @ Berkeley]
- How do people understand language about someone's knowledge and beliefs? [NAACL, 2025]
- How do people understand language about physical scenes and events? [CogSci, 2023]
- How do people understand language about actions and goals? [ICML Theory of Mind, 2023]
- How do people understand vague adjectives in context? [CogSci, 2023]
How do we understand language (and reason in general) with so few computational resources?
- Modeling open-world cognition as on-demand synthesis of probabilistic models. [CogSci, 2025] [Talk @ CogSci]
- How do people build ad-hoc theories of another agent's mental states? [ACL Findings, 2025]
- How can agents build ad-hoc environment models for planning and action? [ICLR, 2024]
How do we build more robust and interpretable AI systems that use language?
- Are current AI systems good at communicating medical information? [ICML, 2025]
- How should we design AI systems that think with and alongside us? [Nature HB, 2024]
- How do people use AI systems to solve college-level math problems? [PNAS, 2024]
- How do people communicate with each other about abstract reasoning tasks? [NeurIPS, 2022]
How do people learn new concepts from language? How do people learn new concepts at all?
- How might we learn new concepts through the process of learning new words? [ICML, 2021]
- How might we use language to predict which abstractions will be useful? [ICLR, 2024]
- Do people change the words they use to reflect concepts they have in mind? [CogSci, 2023]
- How do writing systems change to reflect the structure of a language? [CogSci, 2024] [CogSci, 2025]
- How might we learn new abstract concepts over simpler mental primitives? How can we learn a more complex programming language on top of a simpler one? [PLDI, 2021] [arxiv-extended-version]
- How can we learn new abstract concepts much more efficiently? [POPL, 2023]