Hi, I'm Hunter Lang, a PhD student at MIT advised by David Sontag. You can reach me at hjl at mit dot edu. My research interests include approximate inference, stochastic optimization, and weak supervision.

Preprints:

Beyond perturbation stability: LP recovery guarantees for MAP inference on noisy stable instances (in submission).
HL, A. Reddy, D. Sontag, A. Vijayaraghavan.

Graph cuts always find a global optimum (with a catch) (in submission).
HL, D. Sontag, A. Vijayaraghavan.

Statistical adaptive stochastic gradient methods (in submission).
P. Zhang, HL, Q. Liu, L. Xiao.

Publications:

Self-supervised self-supervision by combining deep learning and probabilistic logic (AAAI 2021).
HL, H. Poon.

Using statistics to automate stochastic optimization (NeurIPS 2019).
HL, P. Zhang, L. Xiao.

Understanding the role of momentum in stochastic gradient methods (NeurIPS 2019).
I. Gitman, HL, P. Zhang, L. Xiao.

Block stability for MAP inference (AISTATS 2019, oral presentation).
HL, D. Sontag, A. Vijayaraghavan.

Optimality of approximate inference algorithms on stable instances (AISTATS 2018).
HL, D. Sontag, A. Vijayaraghavan.

Perturbation stability for approximate MAP inference (M.Eng. thesis, 2019, pdf).
HL.