NEURIPS 2025 "knowledge distillation" Papers

31 papers found

ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data

Xiaoyang Liu, Kangjie Bao, Jiashuo Zhang et al.

NEURIPS 2025 · poster · arXiv:2502.05567
13 citations

Better Estimation of the Kullback–Leibler Divergence Between Language Models

Afra Amini, Tim Vieira, Ryan Cotterell

NEURIPS 2025 · poster · arXiv:2504.10637

Continuous Concepts Removal in Text-to-image Diffusion Models

Tingxu Han, Weisong Sun, Yanrong Hu et al.

NEURIPS 2025 · poster · arXiv:2412.00580
3 citations

Distillation Robustifies Unlearning

Bruce W. Lee, Addie Foote, Alex Infanger et al.

NEURIPS 2025 · spotlight · arXiv:2506.06278
5 citations

DKDR: Dynamic Knowledge Distillation for Reliability in Federated Learning

Yueyang Yuan, Wenke Huang, Guancheng Wan et al.

NEURIPS 2025 · poster

Enhanced Expert Merging for Mixture-of-Experts in Graph Foundation Models

Lei Liu, Xingyu Xia, Qianqian Xie et al.

NEURIPS 2025 · poster

Few-Shot Knowledge Distillation of LLMs With Counterfactual Explanations

Faisal Hamman, Pasan Dissanayake, Yanjun Fu et al.

NEURIPS 2025 · poster · arXiv:2510.21631
1 citation

Fin3R: Fine-tuning Feed-forward 3D Reconstruction Models via Monocular Knowledge Distillation

Weining Ren, Hongjun Wang, Xiao Tan et al.

NEURIPS 2025 · poster · arXiv:2511.22429

HPSERec: A Hierarchical Partitioning and Stepwise Enhancement Framework for Long-tailed Sequential Recommendation

Xiaolong Xu, Xudong Zhao, Haolong Xiang et al.

NEURIPS 2025 · poster

Interaction-Centric Knowledge Infusion and Transfer for Open Vocabulary Scene Graph Generation

Lin Li, Chuhan ZHANG, Dong Zhang et al.

NEURIPS 2025 · poster · arXiv:2511.05935

KINDLE: Knowledge-Guided Distillation for Prior-Free Gene Regulatory Network Inference

Rui Peng, Yuchen Lu, Qichen Sun et al.

NEURIPS 2025 · oral · arXiv:2505.09664

Knowledge Distillation of Uncertainty using Deep Latent Factor Model

Sehyun Park, Jongjin Lee, Yunseop Shin et al.

NEURIPS 2025 · poster · arXiv:2510.19290

Learning Task-Agnostic Representations through Multi-Teacher Distillation

Philippe Formont, Maxime Darrin, Banafsheh Karimian et al.

NEURIPS 2025 · poster · arXiv:2510.18680

Multi-order Orchestrated Curriculum Distillation for Model-Heterogeneous Federated Graph Learning

Guancheng Wan, Xu Cheng, Run Liu et al.

NEURIPS 2025 · poster

MURKA: Multi-Reward Reinforcement Learning with Knowledge Alignment for Optimization Tasks

WANTONG XIE, Yi-Xiang Hu, Jieyang Xu et al.

NEURIPS 2025 · poster

Neural Tangent Knowledge Distillation for Optical Convolutional Networks

Jinlin Xiang, Minho Choi, Yubo Zhang et al.

NEURIPS 2025 · poster · arXiv:2508.08421
1 citation

On the creation of narrow AI: hierarchy and nonlocality of neural network skills

Eric Michaud, Asher Parker-Sartori, Max Tegmark

NEURIPS 2025 · poster · arXiv:2505.15811
2 citations

PLD: A Choice-Theoretic List-Wise Knowledge Distillation

Ejafa Bassam, Dawei Zhu, Kaigui Bian

NEURIPS 2025 · poster · arXiv:2506.12542

Preference Distillation via Value based Reinforcement Learning

Minchan Kwon, Junwon Ko, Kangil Kim et al.

NEURIPS 2025 · poster · arXiv:2509.16965

Preference-driven Knowledge Distillation for Few-shot Node Classification

Xing Wei, Chunchun Chen, Rui Fan et al.

NEURIPS 2025 · poster · arXiv:2510.10116

RUAGO: Effective and Practical Retain-Free Unlearning via Adversarial Attack and OOD Generator

SangYong Lee, Sangjun Chung, Simon Woo

NEURIPS 2025 · poster

Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity

Seonghoon Yu, Dongjun Nam, Dina Katabi et al.

NEURIPS 2025 · poster · arXiv:2510.22480

Spik-NeRF: Spiking Neural Networks for Neural Radiance Fields

Gang Wan, Qinlong Lan, Zihan Li et al.

NEURIPS 2025 · poster

SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning

Yang Liu, Ming Ma, Xiaomin Yu et al.

NEURIPS 2025 · poster · arXiv:2505.12448
19 citations

SSTAG: Structure-Aware Self-Supervised Learning Method for Text-Attributed Graphs

Ruyue Liu, Rong Yin, Xiangzhen Bo et al.

NEURIPS 2025 · poster · arXiv:2510.01248
1 citation

Synergy Between the Strong and the Weak: Spiking Neural Networks are Inherently Self-Distillers

Yongqi Ding, Lin Zuo, Mengmeng Jing et al.

NEURIPS 2025 · oral · arXiv:2510.07924

Token-Level Self-Play with Importance-Aware Guidance for Large Language Models

Tue Le, Hoang Tran, Quyen Tran et al.

NEURIPS 2025 · poster

Universal Cross-Tokenizer Distillation via Approximate Likelihood Matching

Benjamin Minixhofer, Ivan Vulić, Edoardo Maria Ponti

NEURIPS 2025 · poster · arXiv:2503.20083
15 citations

Unlocking SLM Potential for Data Analysis Code Generation via Non-Parametric Knowledge Distillation

Jinyang Li, Jack Williams, Nick McKenna et al.

NEURIPS 2025 · poster

Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models

Tiezheng Zhang, Yitong Li, Yu-Cheng Chou et al.

NEURIPS 2025 · poster · arXiv:2507.07104
2 citations

Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation

Sungmin Cha, Kyunghyun Cho

NEURIPS 2025 · poster · arXiv:2505.13111
4 citations