(Neuro)science of deep learning for safe and fair intelligent learning machines.
Research Scientist @ MIT
Xavier Boix
(Neuro)science of deep learning
Current Research
We address deep learning's lack of interpretability, its data inefficiency (the need for large amounts of training data), its lack of robustness, and its poor generalization outside the training distribution.
To do so, we study deep learning through the lens of a neuroscientist, articulating hypotheses and testing them as if deep nets were another brain: theories are empirically grounded, experiments and datasets are controlled, and individual neurons are at the core of the theories.
The core values of my group and myself are improving the transparency, reproducibility, and integrity of research. We are also consciously committed to equity and justice. More here.
Xavier's Bio
I currently work as a research scientist at MIT. I am grateful to have received training in both machine learning and neuroscience as a postdoc at MIT in the Sinha and Poggio labs, as well as at the multidisciplinary NSF Center for Brains, Minds and Machines. I obtained a doctorate in computer vision from ETH Zurich (2014) and completed a postdoc at the National University of Singapore (2015).
Recent Projects
Our most recent work revolves around two topics: 1) understanding and steering the behaviour of deep nets through their individual neurons, and 2) understanding the principles of modular architectures to facilitate data efficiency.
Opening the "black box" of DNN-based fake news detectors
Our results show that the DNNs' emergent representations capture subtle but consistent differences between the language of fake and real news: signatures of exaggeration and other forms of rhetoric.
Paper
Code
Media
Publications and Code
Check out the full list of publications and code here: