"low-rank adaptation" Papers
46 papers found
Accurate and Efficient Low-Rank Model Merging in Core Space
Aniello Panariello, Daniel Marczak, Simone Magistri et al.
Continual Multiple Instance Learning with Enhanced Localization for Histopathological Whole Slide Image Analysis
Byung Hyun Lee, Wongi Jeong, Woojae Han et al.
dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation
Emanuele Zangrando, Sara Venturini, Francesco Rinaldi et al.
Decoupling Angles and Strength in Low-rank Adaptation
Massimo Bini, Leander Girrbach, Zeynep Akata
DEFT: Decompositional Efficient Fine-Tuning for Text-to-Image Models
Komal Kumar, Rao Anwer, Fahad Shahbaz Khan et al.
GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning
Yeonjoon Jung, Daehyun Ahn, Hyungjun Kim et al.
HMVLM: Human Motion-Vision-Language Model via MoE LoRA
Lei Hu, Yongjing Ye, Shihong Xia
LoRA3D: Low-Rank Self-Calibration of 3D Geometric Foundation Models
Ziqi Lu, Heng Yang, Danfei Xu et al.
LuxDiT: Lighting Estimation with Video Diffusion Transformer
Ruofan Liang, Kai He, Zan Gojcic et al.
Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation
Weibin Liao, Tianlong Wang, Yinghao Zhu et al.
Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering
Ziyu Zhao, Tao Shen, Didi Zhu et al.
MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation
Shen Yuan, Yin Zheng, Taifeng Wang et al.
Multi-Task Vehicle Routing Solver via Mixture of Specialized Experts under State-Decomposable MDP
Yuxin Pan, Zhiguang Cao, Chengyang Gu et al.
PaCA: Partial Connection Adaptation for Efficient Fine-Tuning
Sunghyeon Woo, Sol Namkung, SunWoo Lee et al.
Parameter-Efficient Fine-tuning via Explained Variance Adaptation
Fabian Paischer, Lukas Hauzenberger, Thomas Schmied et al.
PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning
Song Wang, Xiaolu Liu, Lingdong Kong et al.
PoLAR: Polar-Decomposed Low-Rank Adapter Representation
Kai Lion, Liang Zhang, Bingcong Li et al.
Provable Meta-Learning with Low-Rank Adaptations
Jacob Block, Sundararajan Srinivasan, Liam Collins et al.
RaSA: Rank-Sharing Low-Rank Adaptation
Zhiwei He, Zhaopeng Tu, Xing Wang et al.
Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning
Arian Raje, Baris Askin, Divyansh Jhunjhunwala et al.
SMoLoRA: Exploring and Defying Dual Catastrophic Forgetting in Continual Visual Instruction Tuning
Ziqi Wang, Chang Che, Qi Wang et al.
S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning
Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.
Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning
Chaoyang Li, Runze Ye, Jianyang Qin et al.
Uni-LoRA: One Vector is All You Need
Kaiyang Li, Shaobo Han, Qing Su et al.
You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM
Binqian Xu, Haiyang Mei, Zechen Bai et al.
Accurate LoRA-Finetuning Quantization of LLMs via Information Retention
Haotong Qin, Xudong Ma, Xingyu Zheng et al.
AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA
Weitao Feng, Wenbo Zhou, Jiyan He et al.
Asymmetry in Low-Rank Adapters of Foundation Models
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi et al.
BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation
Daeun Lee, Jaehong Yoon, Sung Ju Hwang
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
Can Yaras, Peng Wang, Laura Balzano et al.
Customize-A-Video: One-Shot Motion Customization of Text-to-Video Diffusion Models
Yixuan Ren, Yang Zhou, Jimei Yang et al.
DoRA: Weight-Decomposed Low-Rank Adaptation
Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin et al.
Dropout Mixture Low-Rank Adaptation for Visual Parameter-Efficient Fine-Tuning
Zhengyi Fang, Yue Wang, Ran Yi et al.
E²GAN: Efficient Training of Efficient GANs for Image-to-Image Translation
Yifan Gong, Zheng Zhan, Qing Jin et al.
Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters
Yuhang Zhou, Zihua Zhao, Siyuan Du et al.
Flora: Low-Rank Adapters Are Secretly Gradient Compressors
Yongchang Hao, Yanshuai Cao, Lili Mou
Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning
Subhabrata Dutta, Ishan Pandey, Joykirat Singh et al.
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Jiawei Zhao, Zhenyu Zhang, Beidi Chen et al.
Harmonizing Generalization and Personalization in Federated Prompt Learning
Tianyu Cui, Hongxia Li, Jingya Wang et al.
LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training
Khoi M. Le, Trinh Pham, Tho Quan et al.
LoRA Training in the NTK Regime has No Spurious Local Minima
Uijeong Jang, Jason Lee, Ernest Ryu
Parameter-Efficient Fine-Tuning with Controls
Chi Zhang, Jingpu Cheng, Yanyu Xu et al.
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform
Ziqi Gao, Qichao Wang, Aochuan Chen et al.
Recovering the Pre-Fine-Tuning Weights of Generative Models
Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
Fangzhao Zhang, Mert Pilanci
RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation
Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević et al.