"memory-efficient training" Papers
7 papers found
COAP: Memory-Efficient Training with Correlation-Aware Gradient Projection
Jinqi Xiao, Shen Sang, Tiancheng Zhi et al.
CVPR 2025 (poster) · arXiv:2412.00071
6 citations
Gradient Multi-Normalization for Efficient LLM Training
Meyer Scetbon, Chao Ma, Wenbo Gong et al.
NeurIPS 2025 (poster)
3 citations
DPZero: Private Fine-Tuning of Language Models without Backpropagation
Liang Zhang, Bingcong Li, Kiran Thekumparampil et al.
ICML 2024 (poster)
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Jiawei Zhao, Zhenyu Zhang, Beidi Chen et al.
ICML 2024 (poster)
Structured Inverse-Free Natural Gradient Descent: Memory-Efficient & Numerically-Stable KFAC
Wu Lin, Felix Dangel, Runa Eschenhagen et al.
ICML 2024 (poster)
TinyTrain: Resource-Aware Task-Adaptive Sparse Training of DNNs at the Data-Scarce Edge
Young Kwon, Rui Li, Stylianos Venieris et al.
ICML 2024 (poster)
ZO-AdaMU Optimizer: Adapting Perturbation by the Momentum and Uncertainty in Zeroth-Order Optimization
Shuoran Jiang, Qingcai Chen, Yang Xiang et al.
AAAI 2024 (paper) · arXiv:2312.15184
20 citations