"catastrophic forgetting" Papers
58 papers found • Page 1 of 2
CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering
Tianyu Huai, Jie Zhou, Xingjiao Wu et al.
Continual Knowledge Adaptation for Reinforcement Learning
Jinwu Hu, Zihao Lian, Zhiquan Wen et al.
Coreset Selection via Reducible Loss in Continual Learning
Ruilin Tong, Yuhang Liu, Javen Qinfeng Shi et al.
DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding
Wenwen Yu, Zhibo Yang, Yuliang Liu et al.
Do Your Best and Get Enough Rest for Continual Learning
Hankyul Kang, Gregor Seifer, Donghyun Lee et al.
DuET: Dual Incremental Object Detection via Exemplar-Free Task Arithmetic
Munish Monga, Vishal Chudasama, Pankaj Wasnik et al.
ESSENTIAL: Episodic and Semantic Memory Integration for Video Class-Incremental Learning
Jongseo Lee, Kyungho Bae, Kyle Min et al.
Federated Few-Shot Class-Incremental Learning
Muhammad Anwar Masum, Mahardhika Pratama, Lin Liu et al.
Hippocampal-like Sequential Editing for Continual Knowledge Updates in Large Language Models
Quntian Fang, Zhen Huang, Zhiliang Tian et al.
HMVLM: Human Motion-Vision-Language Model via MoE LoRA
Lei Hu, Yongjing Ye, Shihong Xia
Joint Diffusion Models in Continual Learning
Paweł Skierś, Kamil Deja
Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models
Jiaqi Cao, Jiarui Wang, Rubin Wei et al.
Self-Evolving Pseudo-Rehearsal for Catastrophic Forgetting with Task Similarity in LLMs
Jun Wang, Liang Ding, Shuai Wang et al.
SPFL: Sequential updates with Parallel aggregation for Enhanced Federated Learning under Category and Domain Shifts
Haoyuan Liang, Shilei Cao, Li et al.
STAR: Stability-Inducing Weight Perturbation for Continual Learning
Masih Eskandar, Tooba Imtiaz, Davin Hill et al.
STRAP: Spatio-Temporal Pattern Retrieval for Out-of-Distribution Generalization
Haoyu Zhang, Wentao Zhang, Hao Miao et al.
Synthetic Data is an Elegant GIFT for Continual Vision-Language Models
Bin Wu, Wuxuan Shi, Jinqiao Wang et al.
Theory on Mixture-of-Experts in Continual Learning
Hongbo Li, Sen Lin, Lingjie Duan et al.
Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning
Haomiao Qiu, Miao Zhang, Ziyue Qiao et al.
Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning
Chaoyang Li, Runze Ye, Jianyang Qin et al.
Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning
Gangwei Jiang, Caigao Jiang, Zhaoyi Li et al.
Adaptive Discovering and Merging for Incremental Novel Class Discovery
Guangyao Chen, Peixi Peng, Yangru Huang et al.
An Effective Dynamic Gradient Calibration Method for Continual Learning
Weichen Lin, Jiaxiang Chen, Ruomin Huang et al.
Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion
Linlan Huang, Xusheng Cao, Haori Lu et al.
Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation
Jiyong Li, Dilshod Azizov, Yang Li et al.
CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning
Erum Mushtaq, Duygu Nur Yaldiz, Yavuz Faruk Bakman et al.
Cs2K: Class-specific and Class-shared Knowledge Guidance for Incremental Semantic Segmentation
Wei Cong, Yang Cong, Yuyang Liu et al.
Defying Imbalanced Forgetting in Class Incremental Learning
Shixiong Xu, Gaofeng Meng, Xing Nie et al.
Disentangled Continual Graph Neural Architecture Search with Invariant Modular Supernet
Zeyang Zhang, Xin Wang, Yijian Qin et al.
Doubly Perturbed Task Free Continual Learning
Byung Hyun Lee, Min-hwan Oh, Se Young Chun
DS-AL: A Dual-Stream Analytic Learning for Exemplar-Free Class-Incremental Learning
Huiping Zhuang, Run He, Kai Tong et al.
Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning
Yan Fan, Yu Wang, Pengfei Zhu et al.
Embracing Language Inclusivity and Diversity in CLIP through Continual Language Learning
Bang Yang, Yong Dai, Xuxin Cheng et al.
Few-Shot Image Generation by Conditional Relaxing Diffusion Inversion
Yu Cao, Shaogang Gong
Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning
Jiang-Tian Zhai, Xialei Liu, Lu Yu et al.
Flatness-aware Sequential Learning Generates Resilient Backdoors
Hoang Pham, The-Anh Ta, Anh Tran et al.
Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method
Jeeveswaran Kishaan, Elahe Arani, Bahram Zonooz
History Matters: Temporal Knowledge Editing in Large Language Model
Xunjian Yin, Jin Jiang, Liming Yang et al.
Human Motion Forecasting in Dynamic Domain Shifts: A Homeostatic Continual Test-time Adaptation Framework
Qiongjie Cui, Huaijiang Sun, Bin Li et al.
Layerwise Proximal Replay: A Proximal Point Method for Online Continual Learning
Jinsoo Yoo, Yunpeng Liu, Frank Wood et al.
Learning to Continually Learn with the Bayesian Principle
Soochan Lee, Hyeonseong Jeon, Jaehyeon Son et al.
MAGR: Manifold-Aligned Graph Regularization for Continual Action Quality Assessment
Kanglei Zhou, Liyuan Wang, Xingxing Zhang et al.
Mitigating Catastrophic Forgetting in Online Continual Learning by Modeling Previous Task Interrelations via Pareto Optimization
Yichen Wu, Hong Wang, Peilin Zhao et al.
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
Didi Zhu, Zhongyi Sun, Zexi Li et al.
Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning
Bowen Zheng, Da-Wei Zhou, Han-Jia Ye et al.
Neighboring Perturbations of Knowledge Editing on Large Language Models
Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang et al.
Non-exemplar Online Class-Incremental Continual Learning via Dual-Prototype Self-Augment and Refinement
Fushuo Huo, Wenchao Xu, Jingcai Guo et al.
On the Diminishing Returns of Width for Continual Learning
Etash Guha, Vihan Lakshman
Quantized Prompt for Efficient Generalization of Vision-Language Models
Tianxiang Hao, Xiaohan Ding, Juexiao Feng et al.
Rapid Learning without Catastrophic Forgetting in the Morris Water Maze
Raymond L Wang, Jaedong Hwang, Akhilan Boopathy et al.