TS-MOF: Two-Stage Multi-Objective Fine-tuning for Long-Tailed Recognition

NeurIPS 2025

Abstract

Long-Tailed Recognition (LTR) presents a significant challenge due to extreme class imbalance, where existing methods often struggle to balance performance across head and tail classes. Directly applying multi-objective optimization (MOO) to leverage multiple LTR strategies can be complex and unstable. To address this, we propose TS-MOF (Two-Stage Multi-Objective Fine-tuning), a novel framework that strategically decouples feature learning from classifier adaptation. After standard pre-training, TS-MOF freezes the feature backbone and focuses on an efficient multi-objective fine-tuning of specialized classifier heads. The core of TS-MOF's second stage lies in two innovations: Refined Performance Level Agreement for adaptive task weighting based on real-time per-class performance, and Robust Deterministic Projective Conflict Gradient for stable gradient conflict resolution and constructive fusion. This approach enables effective synergy between diverse LTR strategies, leading to significant and balanced performance improvements. Extensive experiments on CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 demonstrate that TS-MOF achieves state-of-the-art results, particularly enhancing tail class accuracy (e.g., +3.3% on the tail classes of CIFAR100-LT at IR=100) while also improving head class performance, all within a remarkably short fine-tuning period of 20 epochs.
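The abstract describes the stage-2 recipe (frozen backbone, multiple LTR objectives, performance-aware task weights, conflict-projected gradient fusion) without an implementation. The sketch below is a minimal, hypothetical PyTorch rendering of that recipe, not the paper's method: it substitutes a PCGrad-style projection for Robust Deterministic Projective Conflict Gradient, fixed uniform weights for Refined Performance Level Agreement, and a single shared classifier (rather than the paper's specialized heads) so the conflict-resolution step stays visible. The names `task_losses` and `project_conflicts` are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, feat_dim = 100, 256

# Stage 1 is assumed complete: the backbone is frozen and only the classifier adapts.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad_(False)

head = nn.Linear(feat_dim, num_classes)  # shared trainable parameters
opt = torch.optim.SGD(head.parameters(), lr=0.01)

def task_losses(logits, labels, class_freq):
    # Two illustrative LTR objectives sharing one classifier: plain cross-entropy
    # (head-friendly) and logit-adjusted cross-entropy (tail-friendly).
    ce = F.cross_entropy(logits, labels)
    la = F.cross_entropy(logits - class_freq.log(), labels)
    return [ce, la]

def project_conflicts(grads):
    # PCGrad-style projection (a stand-in for the paper's projective conflict
    # gradient): when two task gradients conflict (negative dot product),
    # subtract the conflicting component so fusion stays constructive.
    out = [g.clone() for g in grads]
    for i in range(len(out)):
        for j in range(len(grads)):
            if i != j:
                dot = torch.dot(out[i], grads[j])
                if dot < 0:
                    out[i] -= dot / grads[j].norm().pow(2) * grads[j]
    return out

# One fine-tuning step on a dummy batch.
x = torch.randn(16, 3, 32, 32)
y = torch.randint(0, num_classes, (16,))
class_freq = torch.rand(num_classes).softmax(0)  # placeholder empirical class priors

logits = head(backbone(x))
losses = task_losses(logits, y, class_freq)

params = list(head.parameters())
flat = []
for loss in losses:
    gs = torch.autograd.grad(loss, params, retain_graph=True)
    flat.append(torch.cat([g.reshape(-1) for g in gs]))

# Stand-in for Refined Performance Level Agreement: the paper derives these
# weights from real-time per-class performance; uniform weights are used here.
weights = torch.tensor([0.5, 0.5])
fused = sum(w * g for w, g in zip(weights, project_conflicts(flat)))

opt.zero_grad()
offset = 0
for p in params:  # scatter the fused gradient back onto the parameters
    n = p.numel()
    p.grad = fused[offset:offset + n].view_as(p)
    offset += n
opt.step()
```

Because the backbone is frozen, each step only backpropagates through the classifier, which is what makes the 20-epoch fine-tuning budget reported in the abstract plausible.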
