CVPR 2025 "few-shot learning" Papers
14 papers found
3D Prior Is All You Need: Cross-Task Few-shot 2D Gaze Estimation
Yihua Cheng, Hengfei Wang, Zhongqun Zhang et al.
CVPR 2025 (poster) · arXiv:2502.04074 · 1 citation
A Flag Decomposition for Hierarchical Datasets
Nathan Mankovich, Ignacio Santamaria, Gustau Camps-Valls et al.
CVPR 2025 (poster) · arXiv:2502.07782 · 3 citations
Attribute-formed Class-specific Concept Space: Endowing Language Bottleneck Model with Better Interpretability and Scalability
Jianyang Zhang, Qianli Luo, Guowu Yang et al.
CVPR 2025 (poster) · arXiv:2503.20301
Concept Replacer: Replacing Sensitive Concepts in Diffusion Models via Precision Localization
Lingyun Zhang, Yu Xie, Yanwei Fu et al.
CVPR 2025 (poster) · arXiv:2412.01244 · 5 citations
DefectFill: Realistic Defect Generation with Inpainting Diffusion Model for Visual Inspection
Jaewoo Song, Daemin Park, Kanghyun Baek et al.
CVPR 2025 (highlight) · arXiv:2503.13985 · 6 citations
FFaceNeRF: Few-shot Face Editing in Neural Radiance Fields
Kwan Yun, Chaelin Kim, Hangyeul Shin et al.
CVPR 2025 (poster) · arXiv:2503.17095 · 1 citation
Generalized Few-shot 3D Point Cloud Segmentation with Vision-Language Model
Zhaochong An, Guolei Sun, Yun Liu et al.
CVPR 2025 (poster) · arXiv:2503.16282 · 10 citations
Logits DeConfusion with CLIP for Few-Shot Learning
Shuo Li, Fang Liu, Zehua Hao et al.
CVPR 2025 (poster) · arXiv:2504.12104 · 6 citations
Provoking Multi-modal Few-Shot LVLM via Exploration-Exploitation In-Context Learning
Cheng Chen, Yunpeng Zhai, Yifan Zhao et al.
CVPR 2025 (poster) · arXiv:2506.09473 · 1 citation
Self-Evolving Visual Concept Library using Vision-Language Critics
Atharva Sehgal, Patrick Yuan, Ziniu Hu et al.
CVPR 2025 (poster) · arXiv:2504.00185 · 2 citations
Test-Time Visual In-Context Tuning
Jiahao Xie, Alessio Tonioni, Nathalie Rauschmayr et al.
CVPR 2025 (poster) · arXiv:2503.21777 · 4 citations
The Devil is in Low-Level Features for Cross-Domain Few-Shot Segmentation
Yuhan Liu, Yixiong Zou, Yuhua Li et al.
CVPR 2025 (poster) · arXiv:2503.21150
Tripartite Weight-Space Ensemble for Few-Shot Class-Incremental Learning
Juntae Lee, Munawar Hayat, Sungrack Yun
CVPR 2025 (poster) · arXiv:2506.15720 · 2 citations
UniVAD: A Training-free Unified Model for Few-shot Visual Anomaly Detection
Zhaopeng Gu, Bingke Zhu, Guibo Zhu et al.
CVPR 2025 (poster) · arXiv:2412.03342 · 22 citations