Convergence of Clipped SGD on Convex $(L_0,L_1)$-Smooth Functions
arXiv:2502.16492 · NeurIPS 2025
Abstract
We study stochastic gradient descent (SGD) with gradient clipping on convex functions under a generalized smoothness assumption called $(L_0,L_1)$-smoothness. Using gradient clipping, we establish a high-probability convergence rate that matches the SGD rate in the $L$-smooth case up to polylogarithmic factors and additive terms. We also propose a variant of adaptive SGD with gradient clipping, which achieves the same guarantee. We perform empirical experiments to examine our theory and algorithmic choices.
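For context, gradient clipping rescales the stochastic gradient to a maximum norm before each SGD step; under $(L_0,L_1)$-smoothness, commonly stated as $\|\nabla^2 f(x)\| \le L_0 + L_1\|\nabla f(x)\|$, this keeps the step size controlled in regions where the gradient is large. The following is a minimal Python sketch of a generic clipped-SGD update, not the paper's specific algorithm; the step size `eta`, clipping threshold `c`, and the toy objective are illustrative assumptions.

```python
import numpy as np

def clip(g, c):
    """Rescale g so its Euclidean norm is at most c (generic gradient clipping)."""
    norm = np.linalg.norm(g)
    return g if norm <= c else (c / norm) * g

def clipped_sgd_step(x, stochastic_grad, eta, c):
    """One clipped-SGD update: x_{t+1} = x_t - eta * clip(g_t, c)."""
    return x - eta * clip(stochastic_grad(x), c)

# Illustrative usage on a toy quadratic with noisy gradients (hypothetical setup).
rng = np.random.default_rng(0)
f_grad = lambda x: 2 * x                              # gradient of f(x) = ||x||^2
noisy_grad = lambda x: f_grad(x) + rng.normal(size=x.shape)

x = np.ones(5)
for _ in range(1000):
    x = clipped_sgd_step(x, noisy_grad, eta=0.01, c=1.0)
```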