2025 Poster "catastrophic forgetting" Papers
45 papers found
ADAPT: Attentive Self-Distillation and Dual-Decoder Prediction Fusion for Continual Panoptic Segmentation
Ze Yang, Shichao Dong, Ruibo Li et al.
Adapter Merging with Centroid Prototype Mapping for Scalable Class-Incremental Learning
Takuma Fukuda, Hiroshi Kera, Kazuhiko Kawamoto
Buffer layers for Test-Time Adaptation
Hyeongyu Kim, GeonHui Han, Dosik Hwang
CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning
Marco P. Apolinario, Sakshi Choudhary, Kaushik Roy
Continual Knowledge Adaptation for Reinforcement Learning
Jinwu Hu, ZiHao Lian, Zhiquan Wen et al.
Continual Personalization for Diffusion Models
Yu-Chien Liao, Jr-Jen Chen, Chi-Pin Huang et al.
Continuous Subspace Optimization for Continual Learning
Quan Cheng, Yuanyu Wan, Lingyu Wu et al.
Convergence and Implicit Bias of Gradient Descent on Continual Linear Classification
Hyunji Jung, Hanseul Cho, Chulhee Yun
Coreset Selection via Reducible Loss in Continual Learning
Ruilin Tong, Yuhang Liu, Javen Qinfeng Shi et al.
Divergence-enhanced Knowledge-guided Context Optimization for Visual-Language Prompt Tuning
Yilun Li, Miaomiao Cheng, Xu Han et al.
DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding
Wenwen Yu, Zhibo Yang, Yuliang Liu et al.
Do Your Best and Get Enough Rest for Continual Learning
Hankyul Kang, Gregor Seifer, Donghyun Lee et al.
DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic
Munish Monga, Vishal Chudasama, Pankaj Wasnik et al.
Efficient Online Reinforcement Learning Fine-Tuning Need Not Retain Offline Data
Zhiyuan Zhou, Andy Peng, Qiyang Li et al.
Federated Continual Instruction Tuning
Haiyang Guo, Fanhu Zeng, Fei Zhu et al.
Federated Few-Shot Class-Incremental Learning
Muhammad Anwar Masum, Mahardhika Pratama, Lin Liu et al.
Hierarchical Visual Prompt Learning for Continual Video Instance Segmentation
Jiahua Dong, Hui Yin, Wenqi Liang et al.
Hippocampal-like Sequential Editing for Continual Knowledge Updates in Large Language Models
Quntian Fang, Zhen Huang, Zhiliang Tian et al.
HMVLM: Human Motion-Vision-Language Model via MoE LoRA
Lei Hu, Yongjing Ye, Shihong Xia
iManip: Skill-Incremental Learning for Robotic Manipulation
Zexin Zheng, Jia-Feng Cai, Xiao-Ming Wu et al.
Joint Diffusion Models in Continual Learning
Paweł Skierś, Kamil Deja
Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning
Xusheng Cao, Haori Lu, Linlan Huang et al.
LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging
Ke Wang, Nikos Dimitriadis, Alessandro Favero et al.
Looking Beyond the Known: Towards a Data Discovery Guided Open-World Object Detection
Anay Majee, Amitesh Gangrade, Rishabh Iyer
LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning
Xuan Liu, Xiaobin Chang
Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models
Jiaqi Cao, Jiarui Wang, Rubin Wei et al.
Memory-Integrated Reconfigurable Adapters: A Unified Framework for Settings with Multiple Tasks
Susmit Agrawal, Krishn Vishwas Kher, Saksham Mittal et al.
MINGLE: Mixture of Null-Space Gated Low-Rank Experts for Test-Time Continual Model Merging
Zihuan Qiu, Yi Xu, Chiyuan He et al.
One-for-More: Continual Diffusion Model for Anomaly Detection
Xiaofan Li, Xin Tan, Zhuo Chen et al.
Online Reinforcement Learning in Non-Stationary Context-Driven Environments
Pouya Hamadanian, Arash Nasr-Esfahany, Malte Schwarzkopf et al.
Order-Robust Class Incremental Learning: Graph-Driven Dynamic Similarity Grouping
Guannan Lai, Yujie Li, Xiangkun Wang et al.
Pay Attention to Small Weights
Chao Zhou, Tom Jacobs, Advait Gadhikar et al.
ProtoDepth: Unsupervised Continual Depth Completion with Prototypes
Patrick Rim, Hyoungseob Park, Suchisrit Gangopadhyay et al.
Self-Evolving Pseudo-Rehearsal for Catastrophic Forgetting with Task Similarity in LLMs
Jun Wang, Liang Ding, Shuai Wang et al.
SMoLoRA: Exploring and Defying Dual Catastrophic Forgetting in Continual Visual Instruction Tuning
Ziqi Wang, Chang Che, Qi Wang et al.
SPFL: Sequential updates with Parallel aggregation for Enhanced Federated Learning under Category and Domain Shifts
Haoyuan Liang, Shilei Cao, Li et al.
STAR: Stability-Inducing Weight Perturbation for Continual Learning
Masih Eskandar, Tooba Imtiaz, Davin Hill et al.
Synthetic Data is an Elegant GIFT for Continual Vision-Language Models
Bin Wu, Wuxuan Shi, Jinqiao Wang et al.
Task-Agnostic Guided Feature Expansion for Class-Incremental Learning
Bowen Zheng, Da-Wei Zhou, Han-Jia Ye et al.
Theory on Mixture-of-Experts in Continual Learning
Hongbo Li, Sen Lin, Lingjie Duan et al.
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning
Haomiao Qiu, Miao Zhang, Ziyue Qiao et al.
Tripartite Weight-Space Ensemble for Few-Shot Class-Incremental Learning
Juntae Lee, Munawar Hayat, Sungrack Yun
Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning
Chaoyang Li, Runze Ye, Jianyang Qin et al.
Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning
Gangwei Jiang, Caigao Jiang, Zhaoyi Li et al.
Vision and Language Synergy for Rehearsal Free Continual Learning
Muhammad Anwar Masum, Mahardhika Pratama, Savitha Ramasamy et al.