Kevin Ellis

Research Interests

What would it take to build a machine that can learn, reason, and perceive as flexibly and efficiently as a human? I investigate the hypothesis that at least part of the answer to this question involves program learning: inferring programs from data. The main themes of my research are:
  • Programmatic perception: Human vision is rich -- we infer shape, objects, parts of objects, and relations between objects -- and vision is also abstract: we can perceive the radial symmetry of a spiral staircase, see the forest for the trees, and also the recursion within the trees. Much of this abstract perceptual structure can be modeled and automatically recovered through different kinds of graphics program synthesis, from hand drawings, to 2D and 3D geometric models.
  • Learning to program: Program induction presents two intertwined challenges: the space of all programs is infinite, and so we need a well-tuned inductive bias to organize the space of program hypotheses; and, even if we have this inductive bias, efficiently homing in on the most plausible programs is, in general, intractable. Roughly, these correspond to the sample complexity and computational complexity of program induction. We've found ways of jointly learning both the inductive bias and the search algorithm that will efficiently find the right programs.
  • Theory induction as program synthesis: It's not just scientists who build theories to understand and model the world: during the first dozen years of life, children learn 'intuitive theories' for number, kinship, taxonomy, physics, social interaction, and many other domains. Intelligent machines will need to similarly organize and represent their knowledge of the world in terms of modular, causal, and interpretable theories. A new direction for my research is to explore algorithms for synthesizing causal theories, with the hypothesis that generative programs are the right representational substrate on which to build a theory inducer. My collaborators and I are starting with theories of human language (work in prep), with the longer-term goal of building systems that can infer theories of the physical world.
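To make the two challenges above concrete, here is a minimal sketch of program induction as enumerative search: candidate programs in a tiny hypothetical DSL of integer functions are proposed shortest-first (a simple inductive bias) and checked against input-output examples. None of this code comes from the papers below; the DSL and its primitives are illustrative assumptions only.

```python
# A tiny hypothetical DSL: unary integer functions built by composing primitives.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def enumerate_programs(max_depth):
    """Yield programs (tuples of primitive names) shortest-first,
    so simpler hypotheses are proposed before more complex ones."""
    frontier = [()]
    for _ in range(max_depth):
        next_frontier = []
        for prog in frontier:
            for name in PRIMITIVES:
                candidate = prog + (name,)
                yield candidate
                next_frontier.append(candidate)
        frontier = next_frontier

def run(program, x):
    """Execute a program by applying its primitives left to right."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_depth=4):
    """Return the shortest program consistent with all (input, output) pairs."""
    for program in enumerate_programs(max_depth):
        if all(run(program, i) == o for i, o in examples):
            return program
    return None

# Example: learn f(x) = (x + 1) * 2 from two input-output examples.
print(synthesize([(1, 4), (3, 8)]))  # → ('inc', 'double')
```

Even this toy search grows exponentially with program length, which is exactly why learned inductive biases and learned search guidance matter at scale.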
I work in the Computational Cognitive Science Lab and in the Computer Aided Programming group, and am co-advised by Joshua Tenenbaum and Armando Solar-Lezama. Here is my CV. You can contact me at [Last name][first letter of first name]@mit.edu

Publications

Write, Execute, Assess: Program Synthesis with a REPL. Kevin Ellis*, Maxwell Nye*, Yewen Pu*, Felix Sosa*, Josh Tenenbaum, and Armando Solar-Lezama. (* equal contribution). NeurIPS 2019. Download paper.

Learning to Infer and Execute 3D Shape Programs. Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, and Jiajun Wu. ICLR 2019. Download paper.

Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons from Artificial Grammar Learning. Willem Zuidema, Robert M. French, Raquel G. Alhama, Kevin Ellis, Tim O'Donnell, Tim Sainburg, and Tim Gentner. Topics in Cognitive Science. 2019.

Learning Libraries of Subroutines for Neurally-Guided Bayesian Program Induction. Kevin Ellis, Lucas Morales, Mathias Sablé-Meyer, Armando Solar-Lezama, Joshua B. Tenenbaum. NeurIPS 2018 Spotlight paper. Download code and data.

Learning to Infer Graphics Programs from Hand-Drawn Images. Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, Joshua B. Tenenbaum. NeurIPS 2018 Spotlight paper. arXiv link. Download code and data.

Learning to Learn Programs from Examples: Going Beyond Program Structure. Kevin Ellis, Sumit Gulwani. IJCAI 2017. Download paper.

Sampling for Bayesian Program Learning. Kevin Ellis, Armando Solar-Lezama, Joshua B. Tenenbaum. NeurIPS 2016. Download paper. Download supplement. Download code.

Metareasoning in Symbolic Domains. Kevin Ellis, Owen Lewis. NeurIPS 2015 Workshop on Bounded Optimality and Rational Metareasoning. Download paper.

Unsupervised Learning by Program Synthesis. Kevin Ellis, Armando Solar-Lezama, Joshua B. Tenenbaum. NeurIPS 2015. Download paper. Download supplement. Download poster. Download code.

Dimensionality Reduction via Program Induction. Kevin Ellis, Eyal Dechter, Joshua B. Tenenbaum. AAAI Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches. 2015: 48-52. Download paper.

Bias reformulation for one-shot function induction. Dianhuan Lin, Eyal Dechter, Kevin Ellis, Joshua B. Tenenbaum, Stephen Muggleton. ECAI 2014: 525-530. Download paper.

Learning Graphical Concepts. Kevin Ellis, Eyal Dechter, Ryan Adams, and Joshua Tenenbaum. NeurIPS 2013 workshop on Constructive Machine Learning. Download paper. Download slides. Download poster.