2025 Poster Papers Matching "catastrophic forgetting"

45 papers found

ADAPT: Attentive Self-Distillation and Dual-Decoder Prediction Fusion for Continual Panoptic Segmentation

Ze Yang, Shichao Dong, Ruibo Li et al.

ICLR 2025 poster

Adapter Merging with Centroid Prototype Mapping for Scalable Class-Incremental Learning

Takuma Fukuda, Hiroshi Kera, Kazuhiko Kawamoto

CVPR 2025 poster · arXiv:2412.18219 · 11 citations

Buffer layers for Test-Time Adaptation

Hyeongyu Kim, GeonHui Han, Dosik Hwang

NEURIPS 2025 poster · arXiv:2510.21271

CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning

Marco P. Apolinario, Sakshi Choudhary, Kaushik Roy

ICCV 2025 poster · arXiv:2411.15235 · 1 citation

Continual Knowledge Adaptation for Reinforcement Learning

Jinwu Hu, ZiHao Lian, Zhiquan Wen et al.

NEURIPS 2025 poster · arXiv:2510.19314 · 1 citation

Continual Personalization for Diffusion Models

Yu-Chien Liao, Jr-Jen Chen, Chi-Pin Huang et al.

ICCV 2025 poster · arXiv:2510.02296

Continuous Subspace Optimization for Continual Learning

Quan Cheng, Yuanyu Wan, Lingyu Wu et al.

NEURIPS 2025 poster · arXiv:2505.11816 · 1 citation

Convergence and Implicit Bias of Gradient Descent on Continual Linear Classification

Hyunji Jung, Hanseul Cho, Chulhee Yun

ICLR 2025 poster · arXiv:2504.12712 · 4 citations

Coreset Selection via Reducible Loss in Continual Learning

Ruilin Tong, Yuhang Liu, Javen Qinfeng Shi et al.

ICLR 2025 poster · 12 citations

Divergence-enhanced Knowledge-guided Context Optimization for Visual-Language Prompt Tuning

Yilun Li, Miaomiao Cheng, Xu Han et al.

ICLR 2025 poster · 6 citations

DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding

Wenwen Yu, Zhibo Yang, Yuliang Liu et al.

ICCV 2025 poster · arXiv:2508.08589 · 4 citations

Do Your Best and Get Enough Rest for Continual Learning

Hankyul Kang, Gregor Seifer, Donghyun Lee et al.

CVPR 2025 poster · arXiv:2503.18371 · 2 citations

DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic

Munish Monga, Vishal Chudasama, Pankaj Wasnik et al.

ICCV 2025 poster · arXiv:2506.21260

Efficient Online Reinforcement Learning Fine-Tuning Need Not Retain Offline Data

Zhiyuan Zhou, Andy Peng, Qiyang Li et al.

ICLR 2025 poster · arXiv:2412.07762 · 27 citations

Federated Continual Instruction Tuning

Haiyang Guo, Fanhu Zeng, Fei Zhu et al.

ICCV 2025 poster · arXiv:2503.12897 · 6 citations

Federated Few-Shot Class-Incremental Learning

Muhammad Anwar Masum, Mahardhika Pratama, Lin Liu et al.

ICLR 2025 poster

Hierarchical Visual Prompt Learning for Continual Video Instance Segmentation

Jiahua Dong, Hui Yin, Wenqi Liang et al.

ICCV 2025 poster · arXiv:2508.08612 · 1 citation

Hippocampal-like Sequential Editing for Continual Knowledge Updates in Large Language Models

Quntian Fang, Zhen Huang, Zhiliang Tian et al.

NEURIPS 2025 poster

HMVLM: Human Motion-Vision-Language Model via MoE LoRA

Lei Hu, Yongjing Ye, Shihong Xia

NEURIPS 2025 poster

iManip: Skill-Incremental Learning for Robotic Manipulation

Zexin Zheng, Jia-Feng Cai, Xiao-Ming Wu et al.

ICCV 2025 poster · arXiv:2503.07087 · 4 citations

Joint Diffusion Models in Continual Learning

Paweł Skierś, Kamil Deja

ICCV 2025 poster · arXiv:2411.08224 · 3 citations

Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning

Xusheng Cao, Haori Lu, Linlan Huang et al.

NEURIPS 2025 poster · arXiv:2503.18403

LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging

Ke Wang, Nikos Dimitriadis, Alessandro Favero et al.

ICLR 2025 poster · arXiv:2410.17146 · 23 citations

Looking Beyond the Known: Towards a Data Discovery Guided Open-World Object Detection

Anay Majee, Amitesh Gangrade, Rishabh Iyer

NEURIPS 2025 poster · arXiv:2510.00303

LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning

Xuan Liu, Xiaobin Chang

CVPR 2025 poster · arXiv:2503.18985 · 9 citations

Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models

Jiaqi Cao, Jiarui Wang, Rubin Wei et al.

NEURIPS 2025 poster · arXiv:2508.09874 · 2 citations

Memory-Integrated Reconfigurable Adapters: A Unified Framework for Settings with Multiple Tasks

Susmit Agrawal, Krishn Vishwas Kher, Saksham Mittal et al.

NEURIPS 2025 poster · arXiv:2512.00940

MINGLE: Mixture of Null-Space Gated Low-Rank Experts for Test-Time Continual Model Merging

Zihuan Qiu, Yi Xu, Chiyuan He et al.

NEURIPS 2025 poster · arXiv:2505.11883 · 5 citations

One-for-More: Continual Diffusion Model for Anomaly Detection

Xiaofan Li, Xin Tan, Zhuo Chen et al.

CVPR 2025 poster · arXiv:2502.19848 · 11 citations

Online Reinforcement Learning in Non-Stationary Context-Driven Environments

Pouya Hamadanian, Arash Nasr-Esfahany, Malte Schwarzkopf et al.

ICLR 2025 poster · arXiv:2302.02182 · 3 citations

Order-Robust Class Incremental Learning: Graph-Driven Dynamic Similarity Grouping

Guannan Lai, Yujie Li, Xiangkun Wang et al.

CVPR 2025 poster · arXiv:2502.20032 · 6 citations

Pay Attention to Small Weights

Chao Zhou, Tom Jacobs, Advait Gadhikar et al.

NEURIPS 2025 poster · arXiv:2506.21374

ProtoDepth: Unsupervised Continual Depth Completion with Prototypes

Patrick Rim, Hyoungseob Park, Suchisrit Gangopadhyay et al.

CVPR 2025 poster · arXiv:2503.12745 · 5 citations

Self-Evolving Pseudo-Rehearsal for Catastrophic Forgetting with Task Similarity in LLMs

Jun Wang, Liang Ding, Shuai Wang et al.

NEURIPS 2025 poster

SMoLoRA: Exploring and Defying Dual Catastrophic Forgetting in Continual Visual Instruction Tuning

Ziqi Wang, Chang Che, Qi Wang et al.

ICCV 2025 poster · arXiv:2411.13949 · 3 citations

SPFL: Sequential updates with Parallel aggregation for Enhanced Federated Learning under Category and Domain Shifts

Haoyuan Liang, Shilei Cao, Li et al.

NEURIPS 2025 poster

STAR: Stability-Inducing Weight Perturbation for Continual Learning

Masih Eskandar, Tooba Imtiaz, Davin Hill et al.

ICLR 2025 poster · arXiv:2503.01595 · 5 citations

Synthetic Data is an Elegant GIFT for Continual Vision-Language Models

Bin Wu, Wuxuan Shi, Jinqiao Wang et al.

CVPR 2025 poster · arXiv:2503.04229 · 13 citations

Task-Agnostic Guided Feature Expansion for Class-Incremental Learning

Bowen Zheng, Da-Wei Zhou, Han-Jia Ye et al.

CVPR 2025 poster · arXiv:2503.00823 · 10 citations

Theory on Mixture-of-Experts in Continual Learning

Hongbo Li, Sen Lin, Lingjie Duan et al.

ICLR 2025 poster · arXiv:2406.16437 · 40 citations

Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

Haomiao Qiu, Miao Zhang, Ziyue Qiao et al.

NEURIPS 2025 poster · arXiv:2505.22389

Tripartite Weight-Space Ensemble for Few-Shot Class-Incremental Learning

Juntae Lee, Munawar Hayat, Sungrack Yun

CVPR 2025 poster · arXiv:2506.15720 · 2 citations

Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning

Chaoyang Li, Runze Ye, Jianyang Qin et al.

NEURIPS 2025 poster

Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning

Gangwei Jiang, Caigao Jiang, Zhaoyi Li et al.

ICLR 2025 poster · arXiv:2502.11019 · 8 citations

Vision and Language Synergy for Rehearsal Free Continual Learning

Muhammad Anwar Masum, Mahardhika Pratama, Savitha Ramasamy et al.

ICLR 2025 poster