Enric Boix-Adserà
I am a final-year PhD student in the EECS department at MIT, advised by Guy Bresler and Philippe Rigollet. I received my undergraduate degree in mathematics from Princeton University, where I was advised by Emmanuel Abbe. I am grateful to be generously supported by an NSF Graduate Research Fellowship, a Siebel Fellowship, and an Apple AI/ML fellowship.
Interests: deep learning, average-case complexity, high-dimensional statistics, optimal transport
Publications [sorted by year | sorted by topic]
2019

The Average-Case Complexity of Counting Cliques in Erdős-Rényi Hypergraphs
EB, Matthew Brennan, Guy Bresler.
Foundations of Computer Science (FOCS'19).
Invited to the SIAM Journal on Computing Special Issue for FOCS 2019

Sample-Efficient Active Learning of Causal Trees
Kristjan Greenewald*, Dmitriy Katz-Rogozhnikov*, Karthikeyan Shanmugam*, Sara Magliacane, Murat Kocaoglu, EB, Guy Bresler.
Conference on Neural Information Processing Systems (NeurIPS'19).

Subadditivity Beyond Trees and the Chi-Squared Mutual Information
Emmanuel Abbe, EB.
IEEE International Symposium on Information Theory (ISIT'19).

Randomized Concurrent Set Union and Generalized Wake-Up
Siddhartha Jayanti*, Robert E. Tarjan*, EB.
Symposium on Principles of Distributed Computing (PODC'19).
* denotes equally-contributing first authors. In all other papers, authors contributed equally and are listed in alphabetical order.
Learning

Transformers learn through gradual rank increase
EB*, Etai Littwin*, Emmanuel Abbe, Samy Bengio, Joshua Susskind.
Preprint.

Tight conditions for when the NTK approximation is valid
EB, Etai Littwin.
Preprint.

SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
Emmanuel Abbe, EB, Theodor Misiakiewicz.
Conference on Learning Theory (COLT'23).

GULP: a prediction-based metric between representations
EB*, Hannah Lawrence*, George Stepaniants*, Philippe Rigollet.
Conference on Neural Information Processing Systems (NeurIPS'22).
Selected as oral

On the non-universality of deep learning: quantifying the cost of symmetry
Emmanuel Abbe, EB.
Conference on Neural Information Processing Systems (NeurIPS'22).

The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks
Emmanuel Abbe, EB, Theodor Misiakiewicz.
Conference on Learning Theory (COLT'22).

The staircase property: How hierarchical structure can guide deep learning
Emmanuel Abbe, EB, Matthew Brennan, Guy Bresler, Dheeraj Nagaraj.
Conference on Neural Information Processing Systems (NeurIPS'21).

Chow-Liu++: Optimal Prediction-Centric Learning of Tree Ising Models
EB, Guy Bresler, Frederic Koehler.
Foundations of Computer Science (FOCS'21).

Sample-Efficient Active Learning of Causal Trees
Kristjan Greenewald*, Dmitriy Katz-Rogozhnikov*, Karthikeyan Shanmugam*, Sara Magliacane, Murat Kocaoglu, EB, Guy Bresler.
Conference on Neural Information Processing Systems (NeurIPS'19).

Subadditivity Beyond Trees and the Chi-Squared Mutual Information
Emmanuel Abbe, EB.
IEEE International Symposium on Information Theory (ISIT'19).

An Information-Percolation Bound for Spin Synchronization on General Graphs
Emmanuel Abbe, EB.
Annals of Applied Probability (AAP).

Graph powering and spectral robustness
Emmanuel Abbe, EB, Peter Ralli, Colin Sandon.
SIAM Journal on Mathematics of Data Science (SIMODS).
Optimal Transport
Miscellaneous