Enric Boix-Adserà
I am a final-year PhD student in the EECS department at MIT, advised by Guy Bresler and Philippe Rigollet. I received my undergraduate degree in mathematics from Princeton University, where I was advised by Emmanuel Abbe. I am grateful to be generously supported by an NSF Graduate Research Fellowship, a Siebel Fellowship, and an Apple AI/ML fellowship.
My research focuses on building a mathematical science of deep learning. I aim to characterize the fundamental mechanisms driving how neural networks learn, in order to enable more efficient and more trustworthy deep learning systems.
Publications [sorted by year]
* denotes equally-contributing first authors and (αβ) denotes alphabetical order
2024

When can transformers reason with abstract symbols?
EB*, Omid Saremi, Emmanuel Abbe, Samy Bengio, Etai Littwin, Joshua Susskind.
International Conference on Learning Representations (ICLR'24).

Prompts have evil twins
Rimon Melamed, Lucas H. McCabe, Tanay Wakhare, Yejin Kim, H. Howie Huang, EB.
Preprint.
2023

Transformers learn through gradual rank increase
EB*, Etai Littwin*, Emmanuel Abbe, Samy Bengio, Joshua Susskind.
Conference on Neural Information Processing Systems (NeurIPS'23).

Tight conditions for when the NTK approximation is valid
(αβ) EB, Etai Littwin.
Transactions on Machine Learning Research (TMLR).

SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
(αβ) Emmanuel Abbe, EB, Theodor Misiakiewicz.
Conference on Learning Theory (COLT'23).
2022

GULP: a prediction-based metric between representations
EB*, Hannah Lawrence*, George Stepaniants*, Philippe Rigollet.
Conference on Neural Information Processing Systems (NeurIPS'22).
Selected as oral (top 8% accepted papers)

On the non-universality of deep learning: quantifying the cost of symmetry
(αβ) Emmanuel Abbe, EB.
Conference on Neural Information Processing Systems (NeurIPS'22).

The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks
(αβ) Emmanuel Abbe, EB, Theodor Misiakiewicz.
Conference on Learning Theory (COLT'22).
2021

The staircase property: How hierarchical structure can guide deep learning
(αβ) Emmanuel Abbe, EB, Matthew Brennan, Guy Bresler, Dheeraj Nagaraj.
Conference on Neural Information Processing Systems (NeurIPS'21).

Chow-Liu++: Optimal Prediction-Centric Learning of Tree Ising Models
(αβ) EB, Guy Bresler, Frederic Koehler.
Foundations of Computer Science (FOCS'21).
2020

An Information-Percolation Bound for Spin Synchronization on General Graphs
(αβ) Emmanuel Abbe, EB.
Annals of Applied Probability (AAP).

Graph powering and spectral robustness
(αβ) Emmanuel Abbe, EB, Peter Ralli, Colin Sandon.
SIAM Journal on Mathematics of Data Science (SIMODS).
2019

The Average-Case Complexity of Counting Cliques in Erdős–Rényi Hypergraphs
(αβ) EB, Matthew Brennan, Guy Bresler.
Foundations of Computer Science (FOCS'19).
Invited to the SIAM Journal on Computing Special Issue for FOCS 2019

Sample-Efficient Active Learning of Causal Trees
Kristjan Greenewald*, Dmitriy Katz-Rogozhnikov*, Karthikeyan Shanmugam*, Sara Magliacane, Murat Kocaoglu, EB, Guy Bresler.
Conference on Neural Information Processing Systems (NeurIPS'19).

Subadditivity Beyond Trees and the Chi-Squared Mutual Information
(αβ) Emmanuel Abbe, EB.
IEEE International Symposium on Information Theory (ISIT'19).

Randomized Concurrent Set Union and Generalized Wake-Up
Siddhartha Jayanti*, Robert E. Tarjan*, EB.
Symposium on Principles of Distributed Computing (PODC'19).
Publications [sorted by topic]
Learning

Towards a theory of model distillation
EB.
Preprint.

When can transformers reason with abstract symbols?
EB*, Omid Saremi, Emmanuel Abbe, Samy Bengio, Etai Littwin, Joshua Susskind.
International Conference on Learning Representations (ICLR'24).

Prompts have evil twins
Rimon Melamed, Lucas H. McCabe, Tanay Wakhare, Yejin Kim, H. Howie Huang, EB.
Preprint.

Transformers learn through gradual rank increase
EB*, Etai Littwin*, Emmanuel Abbe, Samy Bengio, Joshua Susskind.
Conference on Neural Information Processing Systems (NeurIPS'23).

Tight conditions for when the NTK approximation is valid
(αβ) EB, Etai Littwin.
Transactions on Machine Learning Research (TMLR).

SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
(αβ) Emmanuel Abbe, EB, Theodor Misiakiewicz.
Conference on Learning Theory (COLT'23).

GULP: a prediction-based metric between representations
EB*, Hannah Lawrence*, George Stepaniants*, Philippe Rigollet.
Conference on Neural Information Processing Systems (NeurIPS'22).
Selected as oral (top 8% accepted papers)

On the non-universality of deep learning: quantifying the cost of symmetry
(αβ) Emmanuel Abbe, EB.
Conference on Neural Information Processing Systems (NeurIPS'22).

The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks
(αβ) Emmanuel Abbe, EB, Theodor Misiakiewicz.
Conference on Learning Theory (COLT'22).

The staircase property: How hierarchical structure can guide deep learning
(αβ) Emmanuel Abbe, EB, Matthew Brennan, Guy Bresler, Dheeraj Nagaraj.
Conference on Neural Information Processing Systems (NeurIPS'21).

Chow-Liu++: Optimal Prediction-Centric Learning of Tree Ising Models
(αβ) EB, Guy Bresler, Frederic Koehler.
Foundations of Computer Science (FOCS'21).

Sample-Efficient Active Learning of Causal Trees
Kristjan Greenewald*, Dmitriy Katz-Rogozhnikov*, Karthikeyan Shanmugam*, Sara Magliacane, Murat Kocaoglu, EB, Guy Bresler.
Conference on Neural Information Processing Systems (NeurIPS'19).

Subadditivity Beyond Trees and the Chi-Squared Mutual Information
(αβ) Emmanuel Abbe, EB.
IEEE International Symposium on Information Theory (ISIT'19).

An Information-Percolation Bound for Spin Synchronization on General Graphs
(αβ) Emmanuel Abbe, EB.
Annals of Applied Probability (AAP).

Graph powering and spectral robustness
(αβ) Emmanuel Abbe, EB, Peter Ralli, Colin Sandon.
SIAM Journal on Mathematics of Data Science (SIMODS).
Optimal Transport
Miscellaneous