From Promise to Practice: Realizing High-performance Decentralized Training

ICLR 2025 · 4 authors · 0 citations (ranked #2223 of 3827 ICLR 2025 papers)

Abstract

Decentralized training of deep neural networks has attracted significant attention for its theoretically superior scalability compared to synchronous data-parallel methods like All-Reduce. However, realizing this potential in multi-node training is challenging due to the complex design space that involves communication topologies, computation patterns, and optimization algorithms. This paper identifies three key factors that can lead to speedups over All-Reduce training and constructs a runtime model to determine when and how decentralization can shorten the per-iteration runtimes. To support the decentralized training of transformer-based models, we introduce a decentralized Adam algorithm that overlaps communications with computations, prove its convergence, and propose an accumulation technique to mitigate the high variance caused by small local batch sizes. We deploy our solution in clusters with up to 64 GPUs, demonstrating its practical advantages in both runtime and generalization performance under a fixed iteration budget. The experiment code is open-source at https://github.com/WangZesen/Decentralized-Training-Exp, and the extension code is open-source at https://github.com/WangZesen/Decent-DP.
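
The abstract describes a decentralized Adam variant in which workers exchange information with neighbors instead of performing a global All-Reduce, and overlap that communication with local computation. As a rough illustration of the general idea only (not the paper's algorithm or the Decent-DP API), the sketch below simulates gossip-style parameter averaging over a ring topology combined with per-worker Adam steps on a toy quadratic objective; the topology, mixing weights, noise model, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

# Toy setup: N workers jointly minimize f(x) = 0.5 * ||x - x_star||^2,
# each seeing only noisy gradients (a stand-in for local mini-batches).
N, DIM, STEPS = 8, 16, 200
LR, BETA1, BETA2, EPS = 0.05, 0.9, 0.999, 1e-8
rng = np.random.default_rng(0)
x_star = rng.normal(size=DIM)

# Each worker keeps its own parameters and Adam moment estimates.
params = rng.normal(size=(N, DIM))
m = np.zeros_like(params)   # Adam first moment, per worker
v = np.zeros_like(params)   # Adam second moment, per worker

for t in range(1, STEPS + 1):
    # 1) Local computation: every worker takes an Adam step on its own
    #    noisy gradient. In a real system this is the work that the
    #    neighbor communication below would overlap with.
    grads = (params - x_star) + 0.1 * rng.normal(size=(N, DIM))
    m = BETA1 * m + (1 - BETA1) * grads
    v = BETA2 * v + (1 - BETA2) * grads ** 2
    m_hat = m / (1 - BETA1 ** t)
    v_hat = v / (1 - BETA2 ** t)
    params = params - LR * m_hat / (np.sqrt(v_hat) + EPS)

    # 2) Gossip communication on a ring: each worker averages its
    #    parameters with its two neighbors (doubly-stochastic weights
    #    1/3, 1/3, 1/3) instead of running a global All-Reduce.
    left, right = np.roll(params, 1, axis=0), np.roll(params, -1, axis=0)
    params = (params + left + right) / 3.0

consensus_gap = np.linalg.norm(params - params.mean(axis=0))
error = np.linalg.norm(params.mean(axis=0) - x_star)
print(f"consensus gap: {consensus_gap:.4f}, distance to optimum: {error:.4f}")
```

In an actual multi-GPU deployment, the neighbor exchange in step 2 would be launched asynchronously so that it overlaps with the computation in step 1, which is the kind of communication-computation overlap the abstract refers to as a source of shorter per-iteration runtimes.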
