"gradient descent dynamics" Papers
7 papers found
How Two-Layer Neural Networks Learn, One (Giant) Step at a Time
Yatin Dandi, Florent Krzakala, Bruno Loureiro et al.
ICLR 2025 (poster) · arXiv:2305.18270 · 47 citations
Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding
Frank Zhengqing Wu, Berfin Simsek, François Ged
ICLR 2025 (poster) · arXiv:2402.05626 · 2 citations
Quantitative convergence of trained neural networks to Gaussian processes
Andrea Agazzi, Eloy Mosig García, Dario Trevisan
NeurIPS 2025 (poster)
A Dynamical Model of Neural Scaling Laws
Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan
ICML 2024 (poster)
A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
Behrad Moniri, Donghwan Lee, Hamed Hassani et al.
ICML 2024 (poster)
The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents
Yatin Dandi, Emanuele Troiani, Luca Arnaboldi et al.
ICML 2024 (poster)
Why Do You Grok? A Theoretical Analysis on Grokking Modular Addition
Mohamad Amin Mohamadi, Zhiyuan Li, Lei Wu et al.
ICML 2024 (poster)