"low-rank adaptation" Papers

46 papers found

Accurate and Efficient Low-Rank Model Merging in Core Space

Aniello Panariello, Daniel Marczak, Simone Magistri et al.

NeurIPS 2025 · poster · arXiv:2509.17786
3 citations

Continual Multiple Instance Learning with Enhanced Localization for Histopathological Whole Slide Image Analysis

Byung Hyun Lee, Wongi Jeong, Woojae Han et al.

ICCV 2025 · poster · arXiv:2507.02395
1 citation

dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation

Emanuele Zangrando, Sara Venturini, Francesco Rinaldi et al.

ICLR 2025 · poster

Decoupling Angles and Strength in Low-rank Adaptation

Massimo Bini, Leander Girrbach, Zeynep Akata

ICLR 2025 · poster · arXiv:2503.18225
6 citations

DEFT: Decompositional Efficient Fine-Tuning for Text-to-Image Models

Komal Kumar, Rao Anwer, Fahad Shahbaz Khan et al.

NeurIPS 2025 · poster · arXiv:2509.22793

GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning

Yeonjoon Jung, Daehyun Ahn, Hyungjun Kim et al.

NeurIPS 2025 · spotlight · arXiv:2505.20355
1 citation

HMVLM: Human Motion-Vision-Language Model via MoE LoRA

Lei Hu, Yongjing Ye, Shihong Xia

NeurIPS 2025 · poster

LoRA3D: Low-Rank Self-Calibration of 3D Geometric Foundation Models

Ziqi Lu, Heng Yang, Danfei Xu et al.

ICLR 2025 · poster · arXiv:2412.07746
11 citations

LuxDiT: Lighting Estimation with Video Diffusion Transformer

Ruofan Liang, Kai He, Zan Gojcic et al.

NeurIPS 2025 · poster · arXiv:2509.03680
5 citations

Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation

Weibin Liao, Tianlong Wang, Yinghao Zhu et al.

NeurIPS 2025 · poster · arXiv:2508.08730
1 citation

Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering

Ziyu Zhao, Tao Shen, Didi Zhu et al.

ICLR 2025 · poster · arXiv:2409.16167
33 citations

MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation

Shen Yuan, Yin Zheng, Taifeng Wang et al.

NeurIPS 2025 · poster · arXiv:2506.14436
1 citation

Multi-Task Vehicle Routing Solver via Mixture of Specialized Experts under State-Decomposable MDP

Yuxin Pan, Zhiguang Cao, Chengyang Gu et al.

NeurIPS 2025 · poster · arXiv:2510.21453

PaCA: Partial Connection Adaptation for Efficient Fine-Tuning

Sunghyeon Woo, Sol Namkung, SunWoo Lee et al.

ICLR 2025 · poster · arXiv:2503.01905
3 citations

Parameter Efficient Fine-tuning via Explained Variance Adaptation

Fabian Paischer, Lukas Hauzenberger, Thomas Schmied et al.

NeurIPS 2025 · poster · arXiv:2410.07170
4 citations

PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning

Song Wang, Xiaolu Liu, Lingdong Kong et al.

CVPR 2025 · poster · arXiv:2504.16023
4 citations

PoLAR: Polar-Decomposed Low-Rank Adapter Representation

Kai Lion, Liang Zhang, Bingcong Li et al.

NeurIPS 2025 · poster · arXiv:2506.03133
3 citations

Provable Meta-Learning with Low-Rank Adaptations

Jacob Block, Sundararajan Srinivasan, Liam Collins et al.

NeurIPS 2025 · poster · arXiv:2410.22264

RaSA: Rank-Sharing Low-Rank Adaptation

Zhiwei He, Zhaopeng Tu, Xing Wang et al.

ICLR 2025 · poster · arXiv:2503.12576
4 citations

Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning

Arian Raje, Baris Askin, Divyansh Jhunjhunwala et al.

NeurIPS 2025 · poster · arXiv:2506.05568
1 citation

SMoLoRA: Exploring and Defying Dual Catastrophic Forgetting in Continual Visual Instruction Tuning

Ziqi Wang, Chang Che, Qi Wang et al.

ICCV 2025 · poster · arXiv:2411.13949
3 citations

S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning

Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.

NeurIPS 2025 · poster · arXiv:2504.06426
2 citations

Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning

Chaoyang Li, Runze Ye, Jianyang Qin et al.

NeurIPS 2025 · poster

Uni-LoRA: One Vector is All You Need

Kaiyang Li, Shaobo Han, Qing Su et al.

NeurIPS 2025 · spotlight · arXiv:2506.00799
2 citations

You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM

Binqian Xu, Haiyang Mei, Zechen Bai et al.

NeurIPS 2025 · poster

Accurate LoRA-Finetuning Quantization of LLMs via Information Retention

Haotong Qin, Xudong Ma, Xingyu Zheng et al.

ICML 2024 · poster

AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA

Weitao Feng, Wenbo Zhou, Jiyan He et al.

ICML 2024 · poster

Asymmetry in Low-Rank Adapters of Foundation Models

Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi et al.

ICML 2024 · poster

BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation

Daeun Lee, Jaehong Yoon, Sung Ju Hwang

ICML 2024 · poster

Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation

Can Yaras, Peng Wang, Laura Balzano et al.

ICML 2024 · poster

Customize-A-Video: One-Shot Motion Customization of Text-to-Video Diffusion Models

Yixuan Ren, Yang Zhou, Jimei Yang et al.

ECCV 2024 · poster · arXiv:2402.14780
47 citations

DoRA: Weight-Decomposed Low-Rank Adaptation

Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin et al.

ICML 2024 · poster

Dropout Mixture Low-Rank Adaptation for Visual Parameters-Efficient Fine-Tuning

Zhengyi Fang, Yue Wang, Ran Yi et al.

ECCV 2024 · poster
5 citations

E$^2$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation

Yifan Gong, Zheng Zhan, Qing Jin et al.

ICML 2024 · poster

Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters

Yuhang Zhou, Zihua Zhao, Siyuan Du et al.

ICML 2024 · poster

Flora: Low-Rank Adapters Are Secretly Gradient Compressors

Yongchang Hao, Yanshuai Cao, Lili Mou

ICML 2024 · poster

Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning

Subhabrata Dutta, Ishan Pandey, Joykirat Singh et al.

AAAI 2024 · paper · arXiv:2312.05571
6 citations

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

Jiawei Zhao, Zhenyu Zhang, Beidi Chen et al.

ICML 2024 · poster

Harmonizing Generalization and Personalization in Federated Prompt Learning

Tianyu Cui, Hongxia Li, Jingya Wang et al.

ICML 2024 · poster

LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training

Khoi M. Le, Trinh Pham, Tho Quan et al.

AAAI 2024 · paper · arXiv:2401.04348
10 citations

LoRA Training in the NTK Regime has No Spurious Local Minima

Uijeong Jang, Jason Lee, Ernest Ryu

ICML 2024 · poster

Parameter-Efficient Fine-Tuning with Controls

Chi Zhang, Jingpu Cheng, Yanyu Xu et al.

ICML 2024 · poster

Parameter-Efficient Fine-Tuning with Discrete Fourier Transform

Ziqi Gao, Qichao Wang, Aochuan Chen et al.

ICML 2024 · poster

Recovering the Pre-Fine-Tuning Weights of Generative Models

Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen

ICML 2024 · poster

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 · poster

RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation

Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević et al.

ICML 2024 · poster