NEURIPS 2025 "training acceleration" Papers
4 papers found
CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models
Zhihang Lin, Mingbao Lin, Yuan Xie et al.
NEURIPS 2025 (poster) · arXiv:2503.22342 · 47 citations
FALQON: Accelerating LoRA Fine-tuning with Low-Bit Floating-Point Arithmetic
Kanghyun Choi, Hyeyoon Lee, Sunjong Park et al.
NEURIPS 2025 · arXiv:2510.24061
MGUP: A Momentum-Gradient Alignment Update Policy for Stochastic Optimization
Da Chang, Ganzhao Yuan
NEURIPS 2025 (spotlight)
REPA Works Until It Doesn’t: Early-Stopped, Holistic Alignment Supercharges Diffusion Training
Ziqiao Wang, Wangbo Zhao, Yuhao Zhou et al.
NEURIPS 2025 (poster) · 8 citations