Ferran Alet
I am a Research Scientist at Google DeepMind. I recently graduated from MIT CSAIL, where I was advised by Leslie Kaelbling, Tomás Lozano-Pérez, and Josh Tenenbaum.
Research: I aim to better understand and improve generalization in machine learning. To that end, I leverage techniques from meta-learning, learning to search, and program synthesis, along with insights from mathematics and the physical sciences. I enjoy building collaborations that span the entire theory-application spectrum.
Twitter / Email / CV / Google Scholar / LinkedIn
Functional Risk Minimization
Ferran Alet,
Clement Gehring,
Tomás Lozano-Pérez,
Joshua B. Tenenbaum,
Leslie Pack Kaelbling
Under review, 2022
We point out a contradiction in how ML commonly models noise and derive a framework for defining losses in function space.
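To make the function-space idea concrete, here is a very hedged sketch of one way such a loss could look: instead of scoring the raw output residual, score the (linearized) smallest parameter perturbation that would make the model fit the point. `functional_loss` and this linearization are my illustration, not the paper's actual objective.

```python
import torch

def functional_loss(model, x, y):
    """Illustrative function-space loss for a scalar-output model on one point:
    size of the (first-order) minimum-norm weight perturbation that fits (x, y)."""
    residual = model(x).squeeze() - y
    grads = torch.autograd.grad(residual, model.parameters(), create_graph=True)
    gnorm2 = sum(g.pow(2).sum() for g in grads)
    # Linearized minimum-norm fit: ||delta theta||^2 = r^2 / ||grad_theta f(x)||^2.
    return residual.pow(2) / (gnorm2 + 1e-8)

model = torch.nn.Linear(4, 1)
loss = functional_loss(model, torch.randn(4), torch.tensor(0.5))
loss.backward()
```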
Noether Networks: meta-learning useful conserved quantities
Ferran Alet*,
Dylan Doblar*,
Allan Zhou,
Joshua B. Tenenbaum,
Kenji Kawaguchi,
Chelsea Finn
NeurIPS 2021  
We propose to encode symmetries as conservation-based tailoring losses and meta-learn them from raw inputs in sequential prediction problems.
website, code,
interview (10k views)
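A minimal sketch of the core mechanism, with illustrative names and sizes: a small network g plays the role of a candidate conserved quantity, and a tailoring loss penalizes its drift along a predicted rollout; in the paper, g is meta-learned so that enforcing its conservation improves prediction.

```python
import torch

# Candidate conserved quantity g: maps a state to a scalar (sizes are toy choices).
g = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 1))

def conservation_loss(traj):
    """traj: (T, state_dim) predicted rollout; penalize variation of g over time."""
    vals = g(traj)
    return ((vals - vals.mean()) ** 2).mean()

traj = torch.randn(10, 4, requires_grad=True)  # stand-in for a model's predictions
conservation_loss(traj).backward()             # gradients flow into g and the predictor
```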
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Ferran Alet,
Maria Bauza,
Kenji Kawaguchi,
Nurullah Giray Kuru,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling
NeurIPS 2021;
the workshop version was a Spotlight at the physical inductive biases workshop
We optimize unsupervised losses for the current input. By optimizing where we act, we bypass generalization gaps and can impose a wide variety of inductive biases.
15-minute talk
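A minimal sketch of the prediction-time loop, assuming a generic PyTorch model; the unsupervised objective, step count, and learning rate are illustrative stand-ins, not the paper's exact recipe.

```python
import copy
import torch

def tailor_and_predict(model, x, unsup_loss, n_inner=5, lr=1e-3):
    """Adapt a copy of `model` to the single input `x` by minimizing an
    unsupervised objective, then predict with the adapted weights."""
    adapted = copy.deepcopy(model)                 # keep trained weights intact
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(n_inner):                       # a few inner steps per input
        opt.zero_grad()
        unsup_loss(adapted, x).backward()
        opt.step()
    with torch.no_grad():
        return adapted(x)

# Toy usage: penalize predictions whose norm differs from the input's norm,
# a stand-in for a physically motivated conservation penalty.
model = torch.nn.Linear(4, 4)
x = torch.randn(1, 4)
y = tailor_and_predict(model, x, lambda m, x: (m(x).norm() - x.norm()).pow(2))
```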
A large-scale benchmark for few-shot program induction and synthesis
Ferran Alet*,
Javier Lopez-Contreras*,
James Koppel,
Maxwell Nye,
Armando Solar-Lezama,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling,
Joshua B. Tenenbaum
ICML 2021  
website
We generate a large quantity of diverse real programs by running code instruction by instruction, obtaining I/O pairs for 200k subprograms.
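A hedged sketch of the data-generation idea in plain Python: execute a function line by line, snapshot its local variables at each step, and treat pairs of snapshots as input/output examples for the subprogram between them. The real pipeline may differ in language and scale.

```python
import sys

def trace_locals(fn, *args):
    """Run fn(*args), recording (line number, locals) at every executed line."""
    snapshots = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            snapshots.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return snapshots

def example(a, b):
    s = a + b
    s = s * 2
    return s

snaps = trace_locals(example, 3, 4)
# Any pair (snaps[i], snaps[j]) with i < j is a candidate I/O pair for the
# subprogram spanning those lines.
```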
Meta-learning curiosity algorithms
Ferran Alet*,
Martin Schneider*,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling
ICLR 2020
code, press
By meta-learning programs instead of neural network weights, we can increase meta-learning generalization. We discover new algorithms in simple environments that generalize to complex ones.
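A toy sketch of "meta-learning programs": enumerate small programs over a tiny DSL of curiosity signals, score each by a stand-in for downstream agent performance in a cheap environment, and keep the best. The DSL, environment, and scorer here are all illustrative.

```python
import itertools

PRIMITIVES = {
    "novelty":  lambda s, mem: min(abs(s - m) for m in mem) if mem else 1.0,
    "surprise": lambda s, mem: abs(s - mem[-1]) if mem else 1.0,
    "constant": lambda s, mem: 1.0,
}

def evaluate(program, steps=20):
    """Stand-in for training an agent with `program` as its intrinsic reward."""
    mem, score = [], 0.0
    for t in range(steps):
        s = (t * 7) % 5                # fake environment states
        score += program(s, mem)       # intrinsic reward shapes exploration
        mem.append(s)
    return score

# Search space: the primitives plus their pairwise sums.
candidates = list(PRIMITIVES.values())
candidates += [lambda s, m, f=f, g=g: f(s, m) + g(s, m)
               for f, g in itertools.combinations(PRIMITIVES.values(), 2)]
best = max(candidates, key=evaluate)   # the selected "curiosity program"
```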
Neural Relational Inference with Fast Modular Meta-learning
Ferran Alet,
Erica Weng,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling
NeurIPS 2019
code
We frame neural relational inference as a case of modular meta-learning and speed up the original modular meta-learning algorithms by two orders of magnitude, making them practical.
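A hedged sketch of the discrete part of the search, assuming each edge of the interaction graph gets one module from a small library; the actual method alternates such structure search (sketched below with simulated annealing) with gradient training of the shared modules.

```python
import math, random

def search_structure(edges, n_modules, loss_of, steps=1000, T0=1.0):
    """Simulated annealing over {edge -> module index} assignments.
    loss_of(assignment) -> prediction loss of the composed model (stand-in)."""
    assign = {e: random.randrange(n_modules) for e in edges}
    cur = loss_of(assign)
    best, best_loss = dict(assign), cur
    for step in range(steps):
        T = T0 * (1 - step / steps) + 1e-3           # annealing temperature
        proposal = dict(assign)
        proposal[random.choice(edges)] = random.randrange(n_modules)
        new = loss_of(proposal)
        if new < cur or random.random() < math.exp((cur - new) / T):
            assign, cur = proposal, new              # accept move
            if cur < best_loss:
                best, best_loss = dict(assign), cur
    return best

# Toy usage: the "loss" simply prefers module 0 on every edge.
best = search_structure(edges=list(range(6)), n_modules=3,
                        loss_of=lambda a: sum(a.values()))
```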
Omnipush: accurate, diverse, real-world dataset of pushing dynamics with RGB-D video
Maria Bauza,
Ferran Alet,
Yen-Chen Lin,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling,
Phillip Isola,
Alberto Rodriguez
IROS 2019
project website /
code /
data /
press
A diverse dataset of 250 objects, each pushed 250 times, all with RGB-D video; it is the first probabilistic meta-learning benchmark.
Graph Element Networks: adaptive, structured computation and memory
Ferran Alet,
Adarsh K. Jeewajee,
Maria Bauza,
Alberto Rodriguez,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling
ICML 2019 (Long talk)
talk /
code
We learn to map functions to functions by combining graph networks and attention to build computational meshes and show this new framework can solve very diverse problems.
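A numpy sketch of the function-to-function idea, with all the learned pieces replaced by fixed stand-ins: input samples are scattered onto a small mesh of latent nodes via distance-based attention, node states are propagated with a few rounds of (here, simple averaging) message passing, and the output function is read out at query coordinates.

```python
import numpy as np

def soft_assign(points, nodes, temp=0.1):
    """Distance-based attention weights from points to mesh nodes."""
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / temp)
    return w / w.sum(axis=1, keepdims=True)

def gen_forward(xs, ys, nodes, adjacency, queries, rounds=3):
    w_in = soft_assign(xs, nodes)                       # encode samples into nodes
    state = w_in.T @ ys / (w_in.sum(axis=0)[:, None] + 1e-8)
    deg = adjacency.sum(axis=1, keepdims=True)
    for _ in range(rounds):                             # message passing on the mesh
        state = 0.5 * state + 0.5 * (adjacency @ state) / deg
    return soft_assign(queries, nodes) @ state          # decode at query points

xs, ys = np.random.rand(50, 2), np.random.rand(50, 1)
nodes = np.array([[i / 3, j / 3] for i in range(4) for j in range(4)])
adjacency = (np.linalg.norm(nodes[:, None] - nodes[None], axis=-1) < 0.4).astype(float)
out = gen_forward(xs, ys, nodes, adjacency, queries=np.random.rand(10, 2))
```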
Modular meta-learning
Ferran Alet,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling
CoRL 2018
video /
code
We propose to do meta-learning by training a set of neural networks to be composable, adapting to new tasks by composing modules in novel ways, similar to how we compose known words to express novel ideas.
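A toy sketch of the adaptation step, assuming an already-trained library of modules: fitting a new task is a discrete search over which modules to chain, rather than gradient descent on weights. Library sizes, the chain structure, and the enumeration are illustrative.

```python
import itertools
import torch

library = [torch.nn.Linear(2, 2) for _ in range(4)]   # shared, meta-trained modules

def compose(i, j):
    """Chain two library modules into one candidate network."""
    return lambda x: library[j](torch.relu(library[i](x)))

def adapt(task_x, task_y):
    """Pick the best 2-module chain for a new task by enumeration."""
    def loss(f):
        with torch.no_grad():
            return torch.nn.functional.mse_loss(f(task_x), task_y).item()
    pairs = itertools.permutations(range(len(library)), 2)
    return min((compose(i, j) for i, j in pairs), key=loss)

task_x, task_y = torch.randn(16, 2), torch.randn(16, 2)
new_task_model = adapt(task_x, task_y)
```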
Finding Frequent Entities in Continuous Data
Ferran Alet,
Rohan Chitnis,
Tomás Lozano-Pérez,
Leslie Pack Kaelbling
IJCAI 2018
video
People often find entities by clustering; we suggest that entities can instead be described as dense regions, and we propose a very simple algorithm for detecting them, with provable guarantees.
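A hedged sketch of the dense-region view: a location is an entity if an eps-ball around it captures at least a `freq` fraction of the data, and sampling candidate centers gives a simple randomized detector. The thresholds and the refinement step are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def frequent_entities(data, eps=0.5, freq=0.1, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(data), size=min(n_samples, len(data)), replace=False)
    found = []
    for c in data[idx]:
        mask = np.linalg.norm(data - c, axis=-1) <= eps
        if mask.mean() >= freq:                    # dense enough to be an entity
            center = data[mask].mean(axis=0)       # refine to the ball's mean
            if all(np.linalg.norm(center - f) > eps for f in found):
                found.append(center)               # keep distinct regions only
    return found

# Toy usage: two tight clusters buried in uniform noise.
data = np.concatenate([np.random.normal(0, 0.1, (50, 2)),
                       np.random.normal(3, 0.1, (50, 2)),
                       np.random.uniform(-5, 5, (100, 2))])
centers = frequent_entities(data, eps=0.5, freq=0.15)
```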
Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
Andy Zeng et al.
ICRA 2018 (Best Systems Paper Award by Amazon Robotics)
talk /
project website
Description of the system for the Amazon Robotics Challenge 2017 competition, in which we won the stowing task.
- OpenAI, April 2022: Why adaptation is useful even if nothing changes
- Princeton, April 2022: A flexible framework of machine learning
- EPFL, March 2022: A flexible framework of machine learning
- Apple ML Research, March 2022: A flexible framework of machine learning
- DeepMind continual & meta-learning seminar, March 2022: Tailoring: why adaptation is useful even when nothing changes
- Allen Institute for AI, March 2022: Beyond monolithic models in machine learning
- CMU Scientific ML Seminar, Jan. 2022: Learning to encode and discover physics-based inductive biases
- Caltech, Jan. 2022: Learning to encode and discover physics-based inductive biases
- DLBCN 2021: Learning to encode and discover inductive biases (video here)
- Meta-learning and multi-agent workshop 2020: Meta-learning and compositionality
- ICML Graph Neural Network workshop 2020: Scaling from simple problems to complex problems using modularity
- INRIA, June 2020: Meta-learning curiosity algorithms
- MIT Machine Learning Tea 2019: Meta-learning and combinatorial generalization
- UC Berkeley, Nov. 2019: Meta-learning structure (slides here)
- KR2ML@IBM Workshop 2019: Graph Element Networks (slides here, video of very similar talk at ICML)
I love mentoring students and working with them. I was honored with the MIT Outstanding Direct Mentor Award '21 (given to two PhD students across all of MIT). Here is a list of the students I've been lucky to mentor:
Graduate students
- Shreyas Kapur (with Josh Tenenbaum); moved to UC Berkeley PhD
- Dylan Doblar; moved to Nvidia
- Martin Schneider; moved to MIT PhD, now co-founder and CEO of RemNote
- Erica Weng; moved to CMU PhD
- Adarsh K. Jeewajee; moved to Stanford PhD
- Paolo Gentili; moved to Hudson River Trading
Undergraduate students
- Jan Olivetti; moved to an MSc at Columbia
- Javier Lopez-Contreras; moved to visiting student at UC Berkeley
- Max Thomsen (with Maria Bauza); moved to MEng in MechE at MIT
- Catherine Wu (with Yilun Du); continued undergrad at MIT
- Nurullah Giray Kuru; continued undergrad at MIT
- Margaret Wu; continued undergrad at MIT
- Edgar Moreno; continued undergrad at UPC-CFIS
- Shengtong Zhang; continued undergrad at MIT
- Patrick John Chia; moved to a master's at Imperial College London
- Catherine Zeng; continued undergrad at Harvard
- Scott Perry; continued undergrad at MIT