Cathy Wong

Welcome to my academic website! My weirder and more inscrutable site, including fiction, audio reporting, and clocks, is here.

I am a third-year PhD student at MIT, advised by Josh Tenenbaum in the Computational Cognitive Science group. Previously, I interned at Flatiron Health and Google (on Research, Classroom, and Glass), and did a brief but wonderful stint at the Nancy podcast for WNYC. I am grateful for support from an MIT Presidential Fellowship. I earned my B.S. and M.S. in computer science at Stanford.

I am interested in how computers can understand language as flexibly as people do, and how people do it in the first place. My research uses program synthesis, planning, and physical simulation to study how language is learned and understood across different world and linguistic contexts.

I also care a lot about the ethical and societal impact of artificial intelligence and cognitive science research: what we do, and who gets to do it. I co-organize the BCS Philosophy Circle and am proud to serve on the MIT Institute Committee on Race and Diversity.

catwong@mit.edu  /  CV  /  Google Scholar  /  GitHub

Selected Publications and Manuscripts in Preparation
2020 Leveraging natural language for program search and abstraction learning.
C. Wong, K. Ellis, J. Andreas, J. Tenenbaum
under review; oral version at AAAI Symposium on Conceptual Abstraction, 2020

We show that inducing joint grammars over natural language annotations and programs can bootstrap faster program search and abstraction learning.

2020 Concept grounding of ARC with iterated human communications.
S. Acquaviva, Y. Pu, C. Wong, M.H. Tessler, M. Nye
AAAI Symposium on Conceptual Abstraction, 2020

We leverage chains of people solving and describing conceptual reasoning tasks to suggest what concepts belong in a DSL.

2020 DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning.
K. Ellis, C. Wong, M. Nye, M. Sable-Meyer, L. Cary, L. Morales, L. Hewitt, A. Solar-Lezama, J. Tenenbaum
arXiv, 2020. paper [arXiv], code [git]

We present a neurosymbolic program induction system that alternates between writing programs that solve tasks and building libraries of abstractions from learned programs.

2019 From mental representations to neural codes: a multi-level approach.
J. Gauthier*, J. Loula*, E. Pollock*, T. Wilson*, C. Wong* (*equal contribution)
Behavioral and Brain Sciences, 2019. paper [BBS]

We argue that a computational approach to mental representation can constrain how we analyze behavior and neural patterns.

2018 Transfer learning with AutoML.
C. Wong, N. Houlsby, Y. Lu, A. Gesmundo
NeurIPS 2018. paper [NeurIPS]

We show that transfer learning across tasks can reduce the search time of automatically building ML architectures by over an order of magnitude.


Template from here.