About Me
Since August 2025, I have been an assistant professor (tenure-track) at the University of Minnesota Twin Cities (Department of Industrial and Systems Engineering), where I work on algorithms for continuous optimization problems arising in theoretical computer science and machine learning.
Prior to this, I was a postdoctoral researcher at MIT (hosted by Ali Jadbabaie and Suvrit Sra), and I completed my Ph.D. in 2023 at the University of Washington, Seattle, where I was advised by Yin Tat Lee (my dissertation). During my Ph.D., I interned with Jelena Diakonikolas at the University of Wisconsin–Madison, with Di Wang and Zhe Feng (Google Market Algorithms), and with Richard Q. Zhang and David P. Woodruff (DeepMind).
Before moving into optimization research, I studied Electronics/Electrical Engineering, with research on rapid diagnostic tests for malaria, and spent some time in industry as a signal-processing engineer working on pulse oximeters and Bluetooth speakers.
Publications
(* denotes equal contribution; α indicates authors listed in alphabetical order.)
Online Bidding Under RoS Constraints Without Knowing the Value
Sushant Vijayan*, Zhe Fengα, Swati Padmanabhanα, Karthikeyan Shanmugamα, Arun Suggalaα, Di Wangα.
The Web Conference (WWW) 2025
First-Order Methods for Linearly Constrained Bilevel Optimization [arXiv]
Guy Kornowskiα, Swati Padmanabhanα, Kai Wangα, Zhe Zhangα, Suvrit Sra
Neural Information Processing Systems (NeurIPS) 2024
On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control [arXiv]
Daniel Pfrommer*, Swati Padmanabhan*, Kwangjun Ahn, Jack Umenberger, Tobia Marcucci, Zakaria Mhammedi, Ali Jadbabaie
Conference on Decision and Control (CDC) 2024
Improving the Bit Complexity of Communication for Distributed Convex Optimization [arXiv]
Mehrdad Ghadiriα, Yin Tat Leeα, Swati Padmanabhanα, William Swartworthα, David P. Woodruffα, Guanghao Yeα
Symposium on Theory of Computing (STOC) 2024 [our talk]
Computing Approximate $\ell_p$ Sensitivities [arXiv], [paper]
Swati Padmanabhanα, David P. Woodruffα, Qiuyi Zhangα
Neural Information Processing Systems (NeurIPS) 2023 [slides]
Online Bidding Algorithms for Return-on-Spend Constrained Advertisers [arXiv]
Zhe Fengα, Swati Padmanabhanα, Di Wangα
The Web Conference (WWW) 2023 [slides]
A Fast Scale-Invariant Algorithm for Non-negative Least Squares with Non-negative Data [arXiv]
Jelena Diakonikolasα, Chenghui Liα, Swati Padmanabhanα, Chaobing Songα
Neural Information Processing Systems (NeurIPS) 2022
A Gradient Sampling Method with Complexity Guarantees for Lipschitz Functions in High and Low Dimensions [arXiv], [paper]
Damek Davisα, Dmitriy Drusvyatskiyα, Yin Tat Leeα, Swati Padmanabhanα, Guanghao Yeα
Neural Information Processing Systems (NeurIPS) 2022, Selected for Oral Presentation [slides]
Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity [arXiv]
Sally Dongα, Haotian Jiangα, Yin Tat Leeα, Swati Padmanabhanα, Guanghao Yeα
Neural Information Processing Systems (NeurIPS) 2022
Computing Lewis Weights to High Precision [arXiv]
Maryam Fazelα, Yin Tat Leeα, Swati Padmanabhanα, Aaron Sidfordα
Symposium on Discrete Algorithms (SODA) 2022 [slides]
A Faster Interior-Point Method for Semidefinite Programming [arXiv]
Haotian Jiangα, Tarun Kathuriaα, Yin Tat Leeα, Swati Padmanabhanα, Zhao Songα
Foundations of Computer Science (FOCS) 2020 [talk]
An $O(m/\epsilon^{3.5})$ Algorithm for Semidefinite Programs with Diagonal Constraints [arXiv]
Yin Tat Leeα, Swati Padmanabhanα
Conference on Learning Theory (COLT) 2020 [talk]
Manuscripts
Improved Sample Complexity of Imitation Learning for Barrier Model Predictive Control [arXiv]
Daniel Pfrommer*, Swati Padmanabhan*, Kwangjun Ahn, Jack Umenberger, Tobia Marcucci, Zakaria Mhammedi, Ali Jadbabaie
On the Hardness of Meaningful Local Guarantees in Nonsmooth Nonconvex Optimization [arXiv]
Guy Kornowskiα, Swati Padmanabhanα, Ohad Shamir
Presented at the NeurIPS 2024 Workshop on Optimization for Machine Learning
Positive Semidefinite Programming: Mixed, Parallel, and Width-Independent [arXiv]
Arun Jambulapatiα, Yin Tat Leeα, Jerry Liα, Swati Padmanabhanα, Kevin Tianα
[5min talk | 20min talk]
This paper appeared in STOC 2020. However, after publication, we discovered an error in the analysis (see the arXiv version), which we hope to fix someday.