NeurIPS "knowledge distillation" Papers
21 papers found
ATLAS: Autoformalizing Theorems through Lifting, Augmentation, and Synthesis of Data
Xiaoyang Liu, Kangjie Bao, Jiashuo Zhang et al.
Better Estimation of the Kullback–Leibler Divergence Between Language Models
Afra Amini, Tim Vieira, Ryan Cotterell
Continuous Concepts Removal in Text-to-image Diffusion Models
Tingxu Han, Weisong Sun, Yanrong Hu et al.
Distillation Robustifies Unlearning
Bruce W. Lee, Addie Foote, Alex Infanger et al.
DKDR: Dynamic Knowledge Distillation for Reliability in Federated Learning
Yueyang Yuan, Wenke Huang, Guancheng Wan et al.
HPSERec: A Hierarchical Partitioning and Stepwise Enhancement Framework for Long-tailed Sequential Recommendation
Xiaolong Xu, Xudong Zhao, Haolong Xiang et al.
Interaction-Centric Knowledge Infusion and Transfer for Open Vocabulary Scene Graph Generation
Lin Li, Chuhan Zhang, Dong Zhang et al.
KINDLE: Knowledge-Guided Distillation for Prior-Free Gene Regulatory Network Inference
Rui Peng, Yuchen Lu, Qichen Sun et al.
Knowledge Distillation of Uncertainty using Deep Latent Factor Model
Sehyun Park, Jongjin Lee, Yunseop Shin et al.
Learning Task-Agnostic Representations through Multi-Teacher Distillation
Philippe Formont, Maxime Darrin, Banafsheh Karimian et al.
Multi-order Orchestrated Curriculum Distillation for Model-Heterogeneous Federated Graph Learning
Guancheng Wan, Xu Cheng, Run Liu et al.
Neural Tangent Knowledge Distillation for Optical Convolutional Networks
Jinlin Xiang, Minho Choi, Yubo Zhang et al.
On the creation of narrow AI: hierarchy and nonlocality of neural network skills
Eric Michaud, Asher Parker-Sartori, Max Tegmark
PLD: A Choice-Theoretic List-Wise Knowledge Distillation
Ejafa Bassam, Dawei Zhu, Kaigui Bian
Preference-driven Knowledge Distillation for Few-shot Node Classification
Xing Wei, Chunchun Chen, Rui Fan et al.
RUAGO: Effective and Practical Retain-Free Unlearning via Adversarial Attack and OOD Generator
SangYong Lee, Sangjun Chung, Simon Woo
Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity
Seonghoon Yu, Dongjun Nam, Dina Katabi et al.
Spik-NeRF: Spiking Neural Networks for Neural Radiance Fields
Gang Wan, Qinlong Lan, Zihan Li et al.
SSTAG: Structure-Aware Self-Supervised Learning Method for Text-Attributed Graphs
Ruyue Liu, Rong Yin, Xiangzhen Bo et al.
Token-Level Self-Play with Importance-Aware Guidance for Large Language Models
Tue Le, Hoang Tran, Quyen Tran et al.
Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models
Tiezheng Zhang, Yitong Li, Yu-Cheng Chou et al.