Jacob Andreas

I'm interested in language as a communicative and computational tool. People learn to understand and generate novel utterances from remarkably little data. Having learned language, we use it to acquire new ideas and to structure our reasoning. Current machine learning techniques fall short of human abilities both in their capacity to learn language and in their capacity to learn from language about the rest of the world. My research aims to (1) understand the computational mechanisms that make efficient language learning possible, and (2) build general-purpose intelligent systems that can communicate effectively with humans and learn from human guidance.

I'm an assistant professor at MIT in EECS and CSAIL. I did my PhD work at Berkeley, where I was a member of the Berkeley NLP Group and the Berkeley AI Research Lab. I've also spent time with the Cambridge NLIP Group, and the Center for Computational Learning Systems and NLP Group at Columbia.

Prospective students: apply through the MIT graduate admissions portal in the fall. (I'm afraid I can't respond to emails individually.)

jda@mit.edu, Curriculum vitæ, Google scholar, elsewhere


Talks / Papers


Some current research directions:

Learning from language

Much of humans' abstract knowledge comes from language rather than direct experience, but almost all machine learning research focuses on learning from comparatively low-level demonstrations or interactions. How do we enable more natural and efficient learning from natural language supervision instead?

Papers:
Modular multitask reinforcement learning with policy sketches (ICML 2017)
Learning with latent language (NAACL 2018)

Interpretation and explanation

How can we help humans understand the features and representational strategies that black-box machine learning algorithms discover? To what extent do these strategies reflect abstractions that we already have names for?

Papers:
Translating neuralese (ACL 2017)
Analogs of linguistic structure in deep representations (EMNLP 2017)

Compositionality and generalization

Compositionality and modularity are core features of representational systems in language, software, and biology. But what problem do compositional representations actually solve? Can we use descriptions of abstract compositional structure in one domain (e.g. language) to learn modular representations in another (e.g. vision)?

Papers:
Neural module networks (CVPR 2016)
Measuring compositionality in representation learning (ICLR 2019)

I'm also interested in trees, graphs, games, and pianos.


Collaboration graph trivia: My Erdős number is at most three (J Andreas to R Kleinberg to L Lovász to P Erdős). My Kevin Bacon number (and consequently my Erdős-Bacon number) remains lamentably undefined, but my Kevin Knight number (since apparently that's a thing) is one. I have never starred in a film with Kevin Knight. Noam Chomsky is my great-great-grand-advisor (J Andreas to D Klein to C Manning to J Bresnan to N Chomsky).


Opinionated bibliographies on:
module networks
language and behavior