ICCV 2025 "knowledge distillation" Papers
6 papers found
Dense2MoE: Restructuring Diffusion Transformer to MoE for Efficient Text-to-Image Generation
Youwei Zheng, Yuxi Ren, Xin Xia et al.
ICCV 2025 poster · arXiv:2510.09094 · 4 citations
General Compression Framework for Efficient Transformer Object Tracking
Lingyi Hong, Jinglun Li, Xinyu Zhou et al.
ICCV 2025 poster · arXiv:2409.17564 · 2 citations
Joint Diffusion Models in Continual Learning
Paweł Skierś, Kamil Deja
ICCV 2025 poster · arXiv:2411.08224 · 3 citations
LLaVA-KD: A Framework of Distilling Multimodal Large Language Models
Yuxuan Cai, Jiangning Zhang, Haoyang He et al.
ICCV 2025 poster · arXiv:2410.16236 · 23 citations
Local Dense Logit Relations for Enhanced Knowledge Distillation
Liuchi Xu, Kang Liu, Jinshuai Liu et al.
ICCV 2025 poster · arXiv:2507.15911
RCTDistill: Cross-Modal Knowledge Distillation Framework for Radar-Camera 3D Object Detection with Temporal Fusion
Geonho Bang, Minjae Seong, Jisong Kim et al.
ICCV 2025 poster · arXiv:2509.17712