NeurIPS "knowledge distillation" Papers

21 papers found

ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data

Xiaoyang Liu, Kangjie Bao, Jiashuo Zhang et al.

NeurIPS 2025 poster · arXiv:2502.05567 · 13 citations

Better Estimation of the Kullback–Leibler Divergence Between Language Models

Afra Amini, Tim Vieira, Ryan Cotterell

NeurIPS 2025 poster · arXiv:2504.10637

Continuous Concepts Removal in Text-to-image Diffusion Models

Tingxu Han, Weisong Sun, Yanrong Hu et al.

NeurIPS 2025 poster · arXiv:2412.00580 · 3 citations

Distillation Robustifies Unlearning

Bruce W. Lee, Addie Foote, Alex Infanger et al.

NeurIPS 2025 spotlight · arXiv:2506.06278 · 5 citations

DKDR: Dynamic Knowledge Distillation for Reliability in Federated Learning

Yueyang Yuan, Wenke Huang, Guancheng Wan et al.

NeurIPS 2025 poster

HPSERec: A Hierarchical Partitioning and Stepwise Enhancement Framework for Long-tailed Sequential Recommendation

Xiaolong Xu, Xudong Zhao, Haolong Xiang et al.

NeurIPS 2025 poster

Interaction-Centric Knowledge Infusion and Transfer for Open Vocabulary Scene Graph Generation

Lin Li, Chuhan Zhang, Dong Zhang et al.

NeurIPS 2025 poster · arXiv:2511.05935

KINDLE: Knowledge-Guided Distillation for Prior-Free Gene Regulatory Network Inference

Rui Peng, Yuchen Lu, Qichen Sun et al.

NeurIPS 2025 oral · arXiv:2505.09664

Knowledge Distillation of Uncertainty using Deep Latent Factor Model

Sehyun Park, Jongjin Lee, Yunseop Shin et al.

NeurIPS 2025 poster · arXiv:2510.19290

Learning Task-Agnostic Representations through Multi-Teacher Distillation

Philippe Formont, Maxime Darrin, Banafsheh Karimian et al.

NeurIPS 2025 poster · arXiv:2510.18680

Multi-order Orchestrated Curriculum Distillation for Model-Heterogeneous Federated Graph Learning

Guancheng Wan, Xu Cheng, Run Liu et al.

NeurIPS 2025 poster

Neural Tangent Knowledge Distillation for Optical Convolutional Networks

Jinlin Xiang, Minho Choi, Yubo Zhang et al.

NeurIPS 2025 poster · arXiv:2508.08421 · 1 citation

On the creation of narrow AI: hierarchy and nonlocality of neural network skills

Eric Michaud, Asher Parker-Sartori, Max Tegmark

NeurIPS 2025 poster · arXiv:2505.15811 · 2 citations

PLD: A Choice-Theoretic List-Wise Knowledge Distillation

Ejafa Bassam, Dawei Zhu, Kaigui Bian

NeurIPS 2025 poster · arXiv:2506.12542

Preference-driven Knowledge Distillation for Few-shot Node Classification

Xing Wei, Chunchun Chen, Rui Fan et al.

NeurIPS 2025 poster · arXiv:2510.10116

RUAGO: Effective and Practical Retain-Free Unlearning via Adversarial Attack and OOD Generator

SangYong Lee, Sangjun Chung, Simon Woo

NeurIPS 2025 poster

Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity

Seonghoon Yu, Dongjun Nam, Dina Katabi et al.

NeurIPS 2025 poster · arXiv:2510.22480

Spik-NeRF: Spiking Neural Networks for Neural Radiance Fields

Gang Wan, Qinlong Lan, Zihan Li et al.

NeurIPS 2025 poster

SSTAG: Structure-Aware Self-Supervised Learning Method for Text-Attributed Graphs

Ruyue Liu, Rong Yin, Xiangzhen Bo et al.

NeurIPS 2025 poster · arXiv:2510.01248 · 1 citation

Token-Level Self-Play with Importance-Aware Guidance for Large Language Models

Tue Le, Hoang Tran, Quyen Tran et al.

NeurIPS 2025 poster

Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models

Tiezheng Zhang, Yitong Li, Yu-Cheng Chou et al.

NeurIPS 2025 poster · arXiv:2507.07104 · 2 citations