NeurIPS 2025 "training efficiency" Papers
7 papers found
Angles Don’t Lie: Unlocking Training‑Efficient RL Through the Model’s Own Signals
Qinsi Wang, Jinghan Ke, Hancheng Ye et al.
NeurIPS 2025 · spotlight
Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents
Han Lin, Jaemin Cho, Amir Zadeh et al.
NeurIPS 2025 · poster · arXiv:2508.05954 · 6 citations
DataRater: Meta-Learned Dataset Curation
Dan Andrei Calian, Greg Farquhar, Iurii Kemaev et al.
NeurIPS 2025 · poster · arXiv:2505.17895 · 7 citations
Efficient Representativeness-Aware Coreset Selection
Zihao Cheng, Binrui Wu, Zhiwei Li et al.
NeurIPS 2025 · poster
Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think
Ge Wu, Shen Zhang, Ruijing Shi et al.
NeurIPS 2025 · oral · arXiv:2507.01467 · 27 citations
SkyLadder: Better and Faster Pretraining via Context Window Scheduling
Tongyao Zhu, Qian Liu, Haonan Wang et al.
NeurIPS 2025 · poster · arXiv:2503.15450 · 2 citations
T-SHIRT: Token-Selective Hierarchical Data Selection for Instruction Tuning
Yanjun Fu, Faisal Hamman, Sanghamitra Dutta
NeurIPS 2025 · poster · arXiv:2506.01317 · 6 citations