Xavier Boix
I use neuroscience to understand and build
intelligent learning machines.

Research Scientist @ MIT
The Neuroscience of Learning Machines
I am pioneering the use of neuroscience to understand learning machines, i.e., articulating hypotheses and testing them as if a learning machine were another brain.

Currently, I am developing a theory that helps address the ongoing crisis in AI regarding lack of interpretability (i.e., the "black-box problem") and data inefficiency (i.e., the need for large amounts of training data, lack of robustness, and poor generalization outside the training distribution).

My research lives at the intersection of the engineering of learning machines, theoretical machine learning, and neuroscience.
Meet my fabulous group of students and postdocs:

Ian Mason (postdoc)
Anirban Sarkar (postdoc)
Amir Rahimi (postdoc)
Vanessa D'Amario (postdoc)
Spandan Madan (PhD student, mentored by Hanspeter Pfister)
Kimberly Villalobos (PhD student, mentored by Dimitris Bertsimas)
Avi Cooper (research assistant)
Shobhita Sundaram (research assistant)

My group and I hold transparency, reproducibility, and integrity of research as core values. We are also consciously committed to equity and justice. More here.
I work as a research scientist at MIT and am a member of the Brain and Cognitive Sciences department and the Sinha Lab for Developmental Research.

I am grateful to have received training in both machine learning and neuroscience as a postdoc at MIT in the Sinha and Poggio labs, as well as at the multidisciplinary NSF Center for Brains, Minds and Machines. I obtained a doctorate from ETH Zurich (2014) in computer vision and completed a postdoc at the National University of Singapore (2015).
Recent Projects
When and How do DNNs Generalize to Out-of-distribution Category-viewpoint Combinations?

1. Data diversity significantly improves OOD performance, but degrades in-distribution performance.
2. Separate architectures significantly outperform shared ones on OOD combinations, unlike in-distribution.
3. Neural specialization facilitates generalization to OOD combinations.
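The notion of an out-of-distribution category-viewpoint combination can be illustrated with a minimal sketch (hypothetical category and viewpoint labels, not the datasets from the project): every category and every viewpoint appears during training, but certain specific pairings are held out and only seen at test time.

```python
from itertools import product

def split_combinations(categories, viewpoints, held_out):
    """Partition all (category, viewpoint) pairs into in-distribution
    pairs (seen during training) and out-of-distribution pairs (held out).

    Each individual category and viewpoint still appears in training via
    other pairings; only the specific held-out combinations are novel.
    """
    all_pairs = set(product(categories, viewpoints))
    ood = set(held_out)
    in_dist = all_pairs - ood
    return in_dist, ood

# Illustrative labels (assumptions, not the paper's data): the network
# sees cars and top views separately, but never a car seen from the top.
categories = ["car", "bus", "truck"]
viewpoints = ["front", "side", "top"]
in_dist, ood = split_combinations(
    categories, viewpoints,
    held_out=[("car", "top"), ("bus", "side")],
)
assert len(in_dist) + len(ood) == len(categories) * len(viewpoints)
```

Evaluating a model only on the `ood` pairs is what the findings above refer to as OOD performance on unseen category-viewpoint combinations.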
Opening the "black-box" of DNN-based fake news detectors

Our results show that the DNNs' emergent representations capture subtle but consistent differences between the language of fake and real news: signatures of exaggeration and other forms of rhetoric.
Publications and Code
Check out the full list of publications and code here:
Much gratitude to our sponsors and industrial partners; without them this research would never have been possible!
MIT 46-4077, 43 Vassar Street, Cambridge, MA 02139, USA

