Poster "model merging" Papers

20 papers found

Accurate and Efficient Low-Rank Model Merging in Core Space

Aniello Panariello, Daniel Marczak, Simone Magistri et al.

NeurIPS 2025 (poster) · arXiv:2509.17786 · 3 citations

CodeMerge: Codebook-Guided Model Merging for Robust Test-Time Adaptation in Autonomous Driving

Huitong Yang, Zhuoxiao Chen, Fengyi Zhang et al.

NeurIPS 2025 (poster) · arXiv:2505.16524

Continual Model Merging without Data: Dual Projections for Balancing Stability and Plasticity

Enneng Yang, Anke Tang, Li Shen et al.

NeurIPS 2025 (poster)

DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic

Munish Monga, Vishal Chudasama, Pankaj Wasnik et al.

ICCV 2025 (poster) · arXiv:2506.21260

FREE-Merging: Fourier Transform for Efficient Model Merging

Shenghe Zheng, Hongzhi Wang

ICCV 2025 (poster) · arXiv:2411.16815 · 3 citations

Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs

Rui Dai, Sile Hu, Xu Shen et al.

ICLR 2025 (poster) · arXiv:2504.10902 · 6 citations

Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering

Ziyu Zhao, Tao Shen, Didi Zhu et al.

ICLR 2025 (poster) · arXiv:2409.16167 · 33 citations

Mix Data or Merge Models? Balancing the Helpfulness, Honesty, and Harmlessness of Large Language Model via Model Merging

Jinluan Yang, Dingnan Jin, Anke Tang et al.

NeurIPS 2025 (poster) · arXiv:2502.06876 · 13 citations

Multimodal Lego: Model Merging and Fine-Tuning Across Topologies and Modalities in Biomedicine

Konstantin Hemker, Nikola Simidjievski, Mateja Jamnik

ICLR 2025 (poster) · arXiv:2405.19950 · 2 citations

PLeaS - Merging Models with Permutations and Least Squares

Anshul Nasery, Jonathan Hayase, Pang Wei Koh et al.

CVPR 2025 (poster) · arXiv:2407.02447 · 10 citations

Task Vector Quantization for Memory-Efficient Model Merging

Youngeun Kim, Seunghwan Lee, Aecheon Jung et al.

ICCV 2025 (poster) · arXiv:2503.06921 · 3 citations

Towards Minimizing Feature Drift in Model Merging: Layer-wise Task Vector Fusion for Adaptive Knowledge Integration

Wenju Sun, Qingyong Li, Wen Wang et al.

NeurIPS 2025 (poster) · arXiv:2505.23859 · 2 citations

Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

Haomiao Qiu, Miao Zhang, Ziyue Qiao et al.

NeurIPS 2025 (poster) · arXiv:2505.22389

Equivariant Deep Weight Space Alignment

Aviv Navon, Aviv Shamsian, Ethan Fetaya et al.

ICML 2024 (poster)

Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch

Le Yu, Bowen Yu, Haiyang Yu et al.

ICML 2024 (poster)

Localizing Task Information for Improved Model Merging and Compression

Ke Wang, Nikolaos Dimitriadis, Guillermo Ortiz-Jimenez et al.

ICML 2024 (poster)

Merging Multi-Task Models via Weight-Ensembling Mixture of Experts

Anke Tang, Li Shen, Yong Luo et al.

ICML 2024 (poster)

On the Emergence of Cross-Task Linearity in Pretraining-Finetuning Paradigm

Zhanpeng Zhou, Zijun Chen, Yilan Chen et al.

ICML 2024 (poster)

Representation Surgery for Multi-Task Model Merging

Enneng Yang, Li Shen, Zhenyi Wang et al.

ICML 2024 (poster)

Training-Free Model Merging for Multi-target Domain Adaptation

Wenyi Li, Huan-ang Gao, Mingju Gao et al.

ECCV 2024 (poster) · arXiv:2407.13771 · 11 citations