"parameter-efficient fine-tuning" Papers
63 papers found • Page 1 of 2
Accurate and Efficient Low-Rank Model Merging in Core Space
Aniello Panariello, Daniel Marczak, Simone Magistri et al.
Compress to Impress: Efficient LLM Adaptation Using a Single Gradient Step on 100 Samples
Shiva Sreeram, Alaa Maalouf, Pratyusha Sharma et al.
Controllable-LPMoE: Adapting to Challenging Object Segmentation via Dynamic Local Priors from Mixture-of-Experts
Yanguang Sun, Jiawei Lian, Jian Yang et al.
CrossSpectra: Exploiting Cross-Layer Smoothness for Parameter-Efficient Fine-Tuning
Yifei Zhang, Hao Zhu, Junhao Dong et al.
dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation
Emanuele Zangrando, Sara Venturini, Francesco Rinaldi et al.
Distribution-Aligned Decoding for Efficient LLM Task Adaptation
Senkang Hu, Xudong Han, Jinqi Jiang et al.
Don’t Forget the Enjoin: FocalLoRA for Instruction Hierarchical Alignment in Large Language Models
Zitong Shi, Guancheng Wan, Haixin Wang et al.
DuoLoRA: Cycle-consistent and Rank-disentangled Content-Style Personalization
Aniket Roy, Shubhankar Borse, Shreya Kadambi et al.
Enhancing Visual Prompting through Expanded Transformation Space and Overfitting Mitigation
Shohei Enomoto
F-Adapter: Frequency-Adaptive Parameter-Efficient Fine-Tuning in Scientific Machine Learning
Hangwei Zhang, Chun Kang, Yan Wang et al.
Fine-tuning with Reserved Majority for Noise Reduction
Shuyang Jiang, Yusheng Liao, Ya Zhang et al.
Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass
Tong Chen, Hao Fang, Patrick Xia et al.
GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning
Yeonjoon Jung, Daehyun Ahn, Hyungjun Kim et al.
Improving Model Representation and Reducing KV Cache via Skip Connections with First Value Heads
Zhoutong Wu, Yuan Zhang, Yiming Dong et al.
LiFT: Learning to Fine-Tune via Bayesian Parameter Efficient Meta Fine-Tuning
Minyoung Kim, Timothy Hospedales
Linearization Explains Fine-Tuning in Large Language Models
Zahra Rahimi Afzal, Tara Esmaeilbeig, Mojtaba Soltanalian et al.
Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation
Weibin Liao, Tianlong Wang, Yinghao Zhu et al.
Motion-Agent: A Conversational Framework for Human Motion Generation with LLMs
Qi Wu, Yubo Zhao, Yifan Wang et al.
Multi-Token Prediction Needs Registers
Anastasios Gerontopoulos, Spyridon Gidaris, Nikos Komodakis
PaCA: Partial Connection Adaptation for Efficient Fine-Tuning
Sunghyeon Woo, Sol Namkung, SunWoo Lee et al.
PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning
Song Wang, Xiaolu Liu, Lingdong Kong et al.
PoLAR: Polar-Decomposed Low-Rank Adapter Representation
Kai Lion, Liang Zhang, Bingcong Li et al.
PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches
Rana Muhammad Shahroz Khan, Pingzhi Li, Sukwon Yun et al.
Provable Meta-Learning with Low-Rank Adaptations
Jacob Block, Sundararajan Srinivasan, Liam Collins et al.
Quantifying Elicitation of Latent Capabilities in Language Models
Elizabeth Donoway, Hailey Joren, Arushi Somani et al.
RaSA: Rank-Sharing Low-Rank Adaptation
Zhiwei He, Zhaopeng Tu, Xing Wang et al.
Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning
Arian Raje, Baris Askin, Divyansh Jhunjhunwala et al.
S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning
Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.
Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning
Somnath Basu Roy Chowdhury, Krzysztof Choromanski, Arijit Sehanobish et al.
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning
Haomiao Qiu, Miao Zhang, Ziyue Qiao et al.
Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning
Chaoyang Li, Runze Ye, Jianyang Qin et al.
Uni-LoRA: One Vector is All You Need
Kaiyang Li, Shaobo Han, Qing Su et al.
You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM
Binqian Xu, Haiyang Mei, Zechen Bai et al.
A Multimodal, Multi-Task Adapting Framework for Video Action Recognition
Mengmeng Wang, Jiazheng Xing, Boyuan Jiang et al.
APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao
ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank
Zhanjie Zhang, Quanwei Zhang, Wei Xing et al.
Asymmetry in Low-Rank Adapters of Foundation Models
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi et al.
Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning
Xinyuan Gao, Songlin Dong, Yuhang He et al.
DoRA: Weight-Decomposed Low-Rank Adaptation
Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin et al.
Dropout Mixture Low-Rank Adaptation for Visual Parameters-Efficient Fine-Tuning
Zhengyi Fang, Yue Wang, Ran Yi et al.
Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters
Yuhang Zhou, Zihua Zhao, Siyuan Du et al.
From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning
Wei Chen, Zhen Huang, Liang Xie et al.
G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks
Anchun Gui, Jinqiang Ye, Han Xiao
I-MedSAM: Implicit Medical Image Segmentation with Segment Anything
Xiaobao Wei, Jiajun Cao, Yizhu Jin et al.
Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks
Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens
Learning to Route Among Specialized Experts for Zero-Shot Generalization
Mohammed Muqeeth, Haokun Liu, Yufan Liu et al.
LoRA Training in the NTK Regime has No Spurious Local Minima
Uijeong Jang, Jason Lee, Ernest Ryu
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning
Shibo Jie, Yehui Tang, Ning Ding et al.
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
Didi Zhu, Zhongyi Sun, Zexi Li et al.
Open-Vocabulary Calibration for Fine-tuned CLIP
Shuoyuan Wang, Jindong Wang, Guoqing Wang et al.