"parameter-efficient fine-tuning" Papers

67 papers found • Page 2 of 2

LoRA Training in the NTK Regime has No Spurious Local Minima

Uijeong Jang, Jason Lee, Ernest Ryu

ICML 2024 • poster

Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning

Shibo Jie, Yehui Tang, Ning Ding et al.

ICML 2024 • poster

Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models

Didi Zhu, Zhongyi Sun, Zexi Li et al.

ICML 2024 • poster

Open-Vocabulary Calibration for Fine-tuned CLIP

Shuoyuan Wang, Jindong Wang, Guoqing Wang et al.

ICML 2024 • poster

OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models

Changhun Lee, Jungyu Jin, Taesu Kim et al.

AAAI 2024 • paper • arXiv:2306.02272
100 citations

Parameter-Efficient Fine-Tuning with Controls

Chi Zhang, Jingpu Cheng, Yanyu Xu et al.

ICML 2024 • poster

Parameter-Efficient Fine-Tuning with Discrete Fourier Transform

Ziqi Gao, Qichao Wang, Aochuan Chen et al.

ICML 2024 • poster

Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models

Yiwen Tang, Ray Zhang, Zoey Guo et al.

AAAI 2024 • paper • arXiv:2310.03059
33 citations

Quantized Prompt for Efficient Generalization of Vision-Language Models

Tianxiang Hao, Xiaohan Ding, Juexiao Feng et al.

ECCV 2024 • poster • arXiv:2407.10704
9 citations

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 • poster

Robustness Tokens: Towards Adversarial Robustness of Transformers

Brian Pulfer, Yury Belousov, Slava Voloshynovskiy

ECCV 2024 • poster • arXiv:2503.10191

RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation

Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević et al.

ICML 2024 • poster

SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation

Junjie Zhang, Chenjia Bai, Haoran He et al.

ICML 2024 • poster

SAM-PARSER: Fine-Tuning SAM Efficiently by Parameter Space Reconstruction

Zelin Peng, Zhengqin Xu, Zhilin Zeng et al.

AAAI 2024 • paper • arXiv:2308.14604

SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models

Xudong Lu, Aojun Zhou, Yuhui Xu et al.

ICML 2024 • poster

Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance

Liting Lin, Heng Fan, Zhipeng Zhang et al.

ECCV 2024 • poster • arXiv:2403.05231
92 citations

Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts

Shengzhuang Chen, Jihoon Tack, Yunqiao Yang et al.

ICML 2024 • poster