lio wong

I am currently a Stanford Human-Centered Artificial Intelligence Postdoctoral Fellow working with the Cognitive Tools Lab. I also maintain affiliations with the Computational Cognitive Science and LINGO labs at MIT. I completed my PhD in Brain and Cognitive Sciences at MIT in 2025, advised by Josh Tenenbaum and Jacob Andreas. I received my B.S. and M.S. in computer science from Stanford, advised by Dan Jurafsky and Sebastian Thrun.

My research asks how people understand and learn from language. How do human minds represent and construct meaning from language — how do we usefully relate words and sentences to everything else that we know and believe? And how do people learn so much from language, including new concepts and theories that might dramatically change how we understand the world around us? I'm particularly interested in understanding how people manage to learn, understand, and usefully reason about language with so much less language experience and way fewer computational resources than any of our current best computational models.

My work seeks to answer these questions by combining theory-driven cognitive experiments with formal computational tools, including structured probabilistic models of cognition, program synthesis, and machine learning approaches. I look for methods that can scale our theoretical and empirical picture of how we understand and learn from language, both by explaining how language relates to other domains of psychology (like intuitive physical or social cognition) and by unifying disparate formal approaches to modeling language (like those from linguistics, cognitive science, and AI). I’m also a writer. I love a heady and intimate sentence, and would like to build models that explain even a sliver of what we get out of ones as rich and unruly as these.

I use they/them pronouns. Earlier publications appear under the name Catherine Wong.

liowong@stanford.edu  /  Google Scholar  /  Github  /  writing and clocks  /  signal hill


Research Areas

How do we understand language with so little language experience?

How do we understand language (and reason in general) with so few computational resources?

How do we build more robust and interpretable AI systems that use language?

How do people learn new concepts from language? How do people learn new concepts at all?