RepLoRA: Reparameterizing Low-rank Adaptation via the Perspective of Mixture of Experts

ICML 2025

Abstract

Low-rank Adaptation (LoRA) has emerged as a powerful and efficient method for fine-tuning large-scale foundation models. Despite its popularity, the theoretical understanding of LoRA has remained underexplored. In this paper, we present a theoretical analysis of LoRA by examining its connection to the Mixture of Experts models. Under this framework, we show that a simple technique, reparameterizing LoRA matrices, can notably accelerate the low-rank matrix estimation process. In particular, we prove that reparameterization can reduce the data needed to achieve a desired estimation error from an exponential to a polynomial scale. Motivated by this insight, we propose Reparameterized Low-Rank Adaptation (RepLoRA), incorporating a lightweight MLP to reparameterize the LoRA matrices. Extensive experiments across multiple domains demonstrate that RepLoRA consistently outperforms vanilla LoRA. With limited data, RepLoRA surpasses LoRA by a substantial margin of up to 40.0% and achieves LoRA's performance using only 30.0% of the training data, highlighting the theoretical and empirical robustness of our PEFT method.
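To make the core idea concrete, below is a minimal PyTorch sketch of a LoRA-style linear adapter whose low-rank factors are passed through a lightweight MLP before the update is applied. The class name `RepLoRALinear`, the MLP widths, and the choice to reparameterize each factor row-wise are illustrative assumptions; the abstract only states that a lightweight MLP reparameterizes the LoRA matrices, so this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class RepLoRALinear(nn.Module):
    """Sketch of a LoRA adapter whose low-rank factors A and B are
    reparameterized by small MLPs before forming the weight update.
    (Illustrative; not the authors' exact design.)"""

    def __init__(self, in_features, out_features, rank=8, hidden=16, alpha=16.0):
        super().__init__()
        # Frozen pretrained weight (stand-in for a base-model layer).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)

        # Trainable low-rank parameters, as in vanilla LoRA.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

        # Lightweight MLPs that reparameterize the low-rank factors
        # (hypothetical placement: applied row-wise to A and B).
        self.rep_A = nn.Sequential(
            nn.Linear(in_features, hidden), nn.GELU(), nn.Linear(hidden, in_features)
        )
        self.rep_B = nn.Sequential(
            nn.Linear(rank, hidden), nn.GELU(), nn.Linear(hidden, rank)
        )
        self.scaling = alpha / rank

    def forward(self, x):
        # Reparameterize A and B through the MLPs before the low-rank update.
        A_hat = self.rep_A(self.A)      # (rank, in_features)
        B_hat = self.rep_B(self.B)      # (out_features, rank)
        delta = x @ A_hat.T @ B_hat.T   # low-rank update path
        return self.base(x) + self.scaling * delta


# Usage: only the adapter parameters (A, B, and the MLPs) are trained.
layer = RepLoRALinear(in_features=768, out_features=768, rank=8)
x = torch.randn(4, 768)
y = layer(x)  # shape (4, 768)
```

At inference, a vanilla LoRA update can be merged into the base weight; with an MLP reparameterization, the merged update is instead computed from the reparameterized factors once training is complete.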
