Kelsey Allen

I am currently a PhD candidate under the supervision of Josh Tenenbaum in the Computational Cognitive Science group. Previously I was an intern at DeepMind, and I received my B.Sc. in Physics from the University of British Columbia.

Email    Twitter    GitHub    Google Scholar   


I am interested in cognitive science, animal cognition, robotics, and AI. I want to understand how humans generalize so well from so little data, and to build machines that are equally flexible. I am especially focused on the interaction between predictive representations and planning, particularly in the domain of tool use.

* denotes equal contribution.

Selected projects for physical reasoning

The Tools challenge: rapid trial-and-error learning in physical problem solving
Kelsey Allen*, Kevin Smith*, Josh Tenenbaum
Cognitive Science Society, 2019 (Oral Presentation)
ICLR SPiRL Workshop, 2019
RLDM, 2019

We present a new domain for testing physical problem-solving skills in humans and machines. In these tool-use problems, people demonstrate both 'a-ha' insights and local optimization, solving each level in 1-10 attempts. We present a model that combines model-based policy optimization with structured object-oriented priors and mimics human performance. In contrast, a deep reinforcement learning baseline fails to generalize effectively.
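As a loose illustration of the sample-simulate-refine idea behind this kind of model-based policy optimization (this is not the actual Tools model or environment; the toy "simulator" and all names below are made up), here is a cross-entropy-style planning loop over a one-dimensional action:

```python
import numpy as np

def simulate(action):
    # Toy physics stand-in: the simulated reward peaks when the action is 2.0.
    return -(action - 2.0) ** 2

def plan(n_iters=20, n_samples=64, elite_frac=0.25, seed=0):
    # Sample candidate actions from a prior, score them in the simulator,
    # and refit the prior to the best ("elite") samples, repeating until
    # the distribution concentrates on high-reward actions.
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 3.0          # broad initial prior over actions
    n_elite = int(n_samples * elite_frac)
    for _ in range(n_iters):
        actions = rng.normal(mu, sigma, size=n_samples)
        scores = np.array([simulate(a) for a in actions])
        elites = actions[np.argsort(scores)[-n_elite:]]
        mu, sigma = elites.mean(), elites.std() + 1e-6
    return mu
```

The structured object-oriented priors in the paper play the role of the initial sampling distribution here: a good prior means far fewer simulations are needed, matching the 1-10 attempts people take.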

Differentiable physics and stable modes for tool-use and manipulation planning
Marc Toussaint, Kelsey Allen, Kevin Smith, Joshua B Tenenbaum
Robotics: Science and Systems (RSS), 2018   (Best Paper)
[Code] [Video]

We integrate Task and Motion Planning (TAMP) with primitives that impose stable kinematic constraints or differentiable dynamical and impulse-exchange constraints at the path-optimization level. This enables the approach to solve a variety of physical puzzles involving tool use and dynamic interactions, in ways similar to how humans solve these same puzzles.

Relational inductive bias for physical construction in humans and machines
Jessica Hamrick*, Kelsey Allen*, Victor Bapst, Tina Zhu, Kevin R McKee, Joshua B Tenenbaum, Peter W Battaglia
Cognitive Science Society, 2018  

We introduce a deep reinforcement learning agent whose policy is represented over the edges of a graph. We demonstrate that this agent learns to glue blocks together so that a block tower remains stable, and that the learned policy generalizes across towers of different sizes. The relational policy representation achieves super-human performance on this task.

Selected projects for few-shot learning
Few-shot Bayesian imitation learning with policies as logic over programs
Tom Silver, Kelsey Allen, Leslie Kaelbling, Josh Tenenbaum
ICLR SPiRL Workshop, 2019
RLDM, 2019
[Website] [Code] [Video]

We learn policies from five or fewer demonstrations that generalize to dramatically different test-task instances.

Infinite mixture prototypes for few-shot learning
Kelsey Allen, Evan Shelhamer*, Hanul Shin*, Josh Tenenbaum
ICML, 2019

We present infinite mixture prototypes (IMP), which adaptively adjust model capacity by representing classes as sets of clusters and inferring their number. IMP can be trained in fully or semi-supervised settings, and generalizes well to sub-class testing from super-class training (with 10-25% improvements over previous approaches).
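A minimal sketch of the multi-cluster-prototype idea (illustrative only: the actual IMP inference, distance metric, and cluster-creation rule differ, and the `radius` threshold and function names here are made up):

```python
import numpy as np

def fit_prototypes(embeddings, labels, radius=1.0):
    # Represent each class as a *set* of cluster prototypes rather than a
    # single mean: a support point starts a new cluster for its class when
    # it is farther than `radius` from every existing cluster of that class
    # (a DP-means-style rule), so model capacity adapts to the data.
    clusters = {}
    for x, y in zip(embeddings, labels):
        cs = clusters.setdefault(y, [])
        if cs:
            dists = [np.linalg.norm(x - np.mean(c, axis=0)) for c in cs]
            j = int(np.argmin(dists))
            if dists[j] <= radius:
                cs[j].append(x)
                continue
        cs.append([x])          # open a new cluster for this class
    prototypes, proto_labels = [], []
    for y, cs in clusters.items():
        for c in cs:
            prototypes.append(np.mean(c, axis=0))
            proto_labels.append(y)
    return np.array(prototypes), proto_labels

def classify(query, prototypes, proto_labels):
    # Predict the label of the nearest prototype across all clusters.
    dists = np.linalg.norm(prototypes - query, axis=1)
    return proto_labels[int(np.argmin(dists))]
```

Because a multi-modal class gets several prototypes instead of one averaged (and misleading) mean, this kind of model can handle super-class training with sub-class testing.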

More AI/ML projects
Residual policy learning
Tom Silver*, Kelsey Allen*, Josh Tenenbaum, Leslie Kaelbling
arXiv, 2018
[Website] [Code] [Video]

We present a simple method for improving nondifferentiable policies by learning a "residual" correction on top of them with model-free deep reinforcement learning.
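The core idea can be sketched in a few lines (a toy sketch, not the paper's implementation: the linear residual and the supervised update below stand in for the neural residual and model-free RL update, and all names are illustrative):

```python
import numpy as np

def initial_policy(state):
    # A fixed, possibly nondifferentiable hand-designed controller
    # (here just a simple proportional rule as a stand-in).
    return -0.5 * state

class ResidualPolicy:
    def __init__(self, dim, lr=0.1):
        # The residual is initialized to zero, so the combined policy
        # starts out behaving exactly like the initial controller.
        self.w = np.zeros((dim, dim))
        self.lr = lr

    def __call__(self, state):
        # Final action = initial controller's action + learned correction.
        return initial_policy(state) + self.w @ state

    def update(self, state, target_action):
        # Toy supervised update toward a better action (a stand-in for
        # the model-free RL update used in practice).
        residual_target = target_action - initial_policy(state)
        error = residual_target - self.w @ state
        self.w += self.lr * np.outer(error, state)
```

The zero initialization is the key design choice: learning can only improve on the initial controller, never start from scratch below it.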

Learning sparse relational transition models
Victoria Xia*, Zi Wang*, Kelsey Allen, Tom Silver, Leslie Kaelbling
ICLR, 2019

We present a method for describing and learning transition models in complex uncertain domains using relational rules.

Learning models for mode-based planning
João Loula, Tom Silver, Kelsey Allen, Josh Tenenbaum
ICML MBRL Workshop, 2019

We present a model that learns mode constraints from expert demonstrations. We show that it is data efficient, and that it learns interpretable representations that it can leverage to effectively plan in out-of-distribution environments.

End-to-end differentiable physics for learning and control
Filipe de Avila Belbute-Peres, Kevin Smith, Kelsey Allen, Josh Tenenbaum, J. Zico Kolter
NeurIPS, 2018 (Spotlight)

We develop a differentiable physics engine in PyTorch which can be incorporated into standard end-to-end pipelines for model-based learning and control.

Relational inductive biases, deep learning, and graph networks
Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu
arXiv, 2018
[Code] [Tutorial Video]

We summarize and provide insights on a large body of literature on Graph Neural Networks (GNNs).

More cognitive science/neuroscience projects
Discovering a symbolic planning language from continuous experience
João Loula, Tom Silver, Kelsey Allen, Josh Tenenbaum
Cognitive Science Society, 2019

We present a model that starts out with a language of low-level physical constraints and, by observing expert demonstrations, builds up a library of high-level concepts that afford planning and action understanding.

High-dimensional filtering supports context-dependent neural integration
Jonathan Gill, Kelsey Allen, Alexander Williams, Mark Goldman
Cosyne, 2019

We present a simple method for achieving context-dependent filtering of incoming signals in biologically plausible neural networks.

Integrating identification and perception: A case study of familiar and unfamiliar face processing.
Kelsey Allen, Ilker Yildirim, Joshua B Tenenbaum
Cognitive Science Society, 2016 (Oral Presentation)

We present a framework for explaining differences between familiar and unfamiliar face processing which combines feedforward neural networks with non-parametric generative models.

Go fishing! Responsibility judgments when cooperation breaks down.
Kelsey Allen, Julian Jara-Ettinger, Tobias Gerstenberg, Max Kleiman-Weiner, Joshua B Tenenbaum
Cognitive Science Society, 2015

We present a coordination game in which three agents must use their knowledge of each other's abilities to determine the best action to take. We show that a model which acts to maximize utility under recursive theory of mind, combined with a measure of outcome, best predicts human decisions.

From my undergraduate days (physics, ants and language)
Search for high-mass dilepton resonances in pp collisions at √s = 8 TeV with the ATLAS detector
ATLAS Collaboration
Physical Review D, 2014

I contributed analyses to determine new limits for particles that might emerge from minimal Z' models.

Interactions increase forager availability and activity in harvester ants
Evlyn Pless, Jovel Queirolo, Noa Pinter-Wollman, Sam Crow, Kelsey Allen, Maya B Mathur, Deborah M Gordon
PLoS One, 2015

We analyzed the foraging patterns of red harvester ants and found that ants that had more antennal contacts with returning foragers were more likely to leave the nest to forage.

Detecting disagreement in conversations using pseudo-monologic rhetorical structure
Kelsey Allen, Giuseppe Carenini, Raymond Ng
EMNLP, 2014

We developed a new set of features based on the automatically generated rhetorical structure of online forum conversations, and show that these features are helpful for detecting disagreement.

Template from here.