Hi, I'm Hunter Lang. I'm a PhD student at MIT, advised by David Sontag. You can reach me at hjl@mit.edu. My research interests include approximate inference, stochastic optimization, and weak supervision.

Publications:

Graph cuts always find a global optimum for Potts models (with a catch) (ICML 2021, long oral presentation).
HL, D. Sontag, A. Vijayaraghavan.

Beyond perturbation stability: LP recovery guarantees for MAP inference on noisy stable instances (AISTATS 2021).
HL*, A. Reddy*, D. Sontag, A. Vijayaraghavan.
*equal contribution

Self-supervised self-supervision by combining deep learning and probabilistic logic (AAAI 2021).
HL, H. Poon.

Using statistics to automate stochastic optimization (NeurIPS 2019).
HL, P. Zhang, L. Xiao.

Understanding the role of momentum in stochastic gradient methods (NeurIPS 2019).
I. Gitman, HL, P. Zhang, L. Xiao.

Block stability for MAP inference (AISTATS 2019, oral presentation).
HL, D. Sontag, A. Vijayaraghavan.

Optimality of approximate inference algorithms on stable instances (AISTATS 2018).
HL, D. Sontag, A. Vijayaraghavan.

Preprints:

Statistical adaptive stochastic gradient methods (2020).
P. Zhang, HL, Q. Liu, L. Xiao.