Poster "model distillation" Papers

10 papers found

FADA: Fast Diffusion Avatar Synthesis with Mixed-Supervised Multi-CFG Distillation

Tianyun Zhong, Chao Liang, Jianwen Jiang et al.

CVPR 2025 · poster · arXiv:2412.16915
5 citations

Position: Require Frontier AI Labs To Release Small "Analog" Models

Shriyash Upadhyay, Philip Quirke, Narmeen Oozeer et al.

NeurIPS 2025 · poster

SoundCTM: Unifying Score-based and Consistency Models for Full-band Text-to-Sound Generation

Koichi Saito, Dongjun Kim, Takashi Shibuya et al.

ICLR 2025 · poster · arXiv:2405.18503
9 citations

SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction

Ling Yang, Zhaochen Yu, Tianjun Zhang et al.

ICLR 2025 · poster · arXiv:2410.09008
12 citations

Towards Thinking-Optimal Scaling of Test-Time Compute for LLM Reasoning

Wenkai Yang, Shuming Ma, Yankai Lin et al.

NeurIPS 2025 · poster · arXiv:2502.18080
96 citations

AMD: Automatic Multi-step Distillation of Large-scale Vision Models

Cheng Han, Qifan Wang, Sohail A Dianat et al.

ECCV 2024 · poster · arXiv:2407.04208
14 citations

Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models

Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri et al.

ICML 2024 · poster

MGit: A Model Versioning and Management System

Wei Hao, Daniel Mendoza, Rafael Mendes et al.

ICML 2024 · poster

MobileNetV4: Universal Models for the Mobile Ecosystem

Danfeng Qin, Chas Leichner, Manolis Delakis et al.

ECCV 2024 · poster · arXiv:2404.10518
407 citations

USTAD: Unified Single-model Training Achieving Diverse Scores for Information Retrieval

Seungyeon Kim, Ankit Singh Rawat, Manzil Zaheer et al.

ICML 2024 · poster