NEURIPS 2025 "gradient descent" Papers
8 papers found
Complexity Scaling Laws for Neural Models using Combinatorial Optimization
Lowell Weissman, Michael Krumdick, A. Abbott
NeurIPS 2025 · poster · arXiv:2506.12932
Convergence Rates for Gradient Descent on the Edge of Stability for Overparametrised Least Squares
Lachlan MacDonald, Hancheng Min, Leandro Palma et al.
NeurIPS 2025 · poster · arXiv:2510.17506
Hamiltonian Descent Algorithms for Optimization: Accelerated Rates via Randomized Integration Time
Qiang Fu, Andre Wibisono
NeurIPS 2025 · spotlight · arXiv:2505.12553
2 citations
MAP Estimation with Denoisers: Convergence Rates and Guarantees
Scott Pesme, Giacomo Meanti, Michael Arbel et al.
NeurIPS 2025 · poster · arXiv:2507.15397
2 citations
New Perspectives on the Polyak Stepsize: Surrogate Functions and Negative Results
Francesco Orabona, Ryan D'Orazio
NeurIPS 2025 · poster · arXiv:2505.20219
5 citations
Simple and Optimal Sublinear Algorithms for Mean Estimation
Beatrice Bertolotti, Matteo Russo, Chris Schwiegelshohn et al.
NeurIPS 2025 · poster · arXiv:2406.05254
The Implicit Bias of Structured State Space Models Can Be Poisoned With Clean Labels
Yonatan Slutzky, Yotam Alexander, Noam Razin et al.
NeurIPS 2025 · spotlight · arXiv:2410.10473
2 citations
Transformers are almost optimal metalearners for linear classification
Roey Magen, Gal Vardi
NeurIPS 2025 · poster · arXiv:2510.19797
1 citation