Poster "knowledge distillation" Papers

129 papers found • Page 2 of 3

Multi-order Orchestrated Curriculum Distillation for Model-Heterogeneous Federated Graph Learning

Guancheng Wan, Xu Cheng, Run Liu et al.

NeurIPS 2025 poster

Neural Tangent Knowledge Distillation for Optical Convolutional Networks

Jinlin Xiang, Minho Choi, Yubo Zhang et al.

NeurIPS 2025 poster · arXiv:2508.08421
1 citation

On LLM Knowledge Distillation - A Comparison between Forward KL and Reverse KL

Yihan Cao, Yanbin Kang

ICLR 2025 poster

On the creation of narrow AI: hierarchy and nonlocality of neural network skills

Eric Michaud, Asher Parker-Sartori, Max Tegmark

NeurIPS 2025 poster · arXiv:2505.15811
2 citations

PLD: A Choice-Theoretic List-Wise Knowledge Distillation

Ejafa Bassam, Dawei Zhu, Kaigui Bian

NeurIPS 2025 poster · arXiv:2506.12542

Point-SAM: Promptable 3D Segmentation Model for Point Clouds

Yuchen Zhou, Jiayuan Gu, Tung Chiang et al.

ICLR 2025 poster · arXiv:2406.17741
40 citations

Preference-driven Knowledge Distillation for Few-shot Node Classification

Xing Wei, Chunchun Chen, Rui Fan et al.

NeurIPS 2025 poster · arXiv:2510.10116

Prevalence of Negative Transfer in Continual Reinforcement Learning: Analyses and a Simple Baseline

Hongjoon Ahn, Jinu Hyeon, Youngmin Oh et al.

ICLR 2025 poster
2 citations

RCTDistill: Cross-Modal Knowledge Distillation Framework for Radar-Camera 3D Object Detection with Temporal Fusion

Geonho Bang, Minjae Seong, Jisong Kim et al.

ICCV 2025 poster · arXiv:2509.17712

RUAGO: Effective and Practical Retain-Free Unlearning via Adversarial Attack and OOD Generator

SangYong Lee, Sangjun Chung, Simon Woo

NeurIPS 2025 poster

Scale-aware Recognition in Satellite Images under Resource Constraints

Shreelekha Revankar, Cheng Perng Phoo, Utkarsh Kumar Mall et al.

ICLR 2025 poster · arXiv:2411.00210
1 citation

SDGOCC: Semantic and Depth-Guided Bird's-Eye View Transformation for 3D Multimodal Occupancy Prediction

ZaiPeng Duan, Xuzhong Hu, Pei An et al.

CVPR 2025 poster · arXiv:2507.17083
5 citations

Self-Updatable Large Language Models by Integrating Context into Model Parameters

Yu Wang, Xinshuang Liu, Xiusi Chen et al.

ICLR 2025 poster · arXiv:2410.00487
5 citations

SelKD: Selective Knowledge Distillation via Optimal Transport Perspective

Liangliang Shi, Zhengyan Shi, Junchi Yan

ICLR 2025 poster
1 citation

Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity

Seonghoon Yu, Dongjun Nam, Dina Katabi et al.

NeurIPS 2025 poster · arXiv:2510.22480

Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting

Zilong (Ryan) Wang, Zifeng Wang, Long Le et al.

ICLR 2025 poster · arXiv:2407.08223
75 citations

Spik-NeRF: Spiking Neural Networks for Neural Radiance Fields

Gang Wan, Qinlong Lan, Zihan Li et al.

NeurIPS 2025 poster

SSTAG: Structure-Aware Self-Supervised Learning Method for Text-Attributed Graphs

Ruyue Liu, Rong Yin, Xiangzhen Bo et al.

NeurIPS 2025 poster · arXiv:2510.01248
1 citation

Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning

Yuxiang Lu, Shengcao Cao, Yu-Xiong Wang

ICLR 2025 poster · arXiv:2410.14633
6 citations

Temporal Separation with Entropy Regularization for Knowledge Distillation in Spiking Neural Networks

Kairong Yu, Chengting Yu, Tianqing Zhang et al.

CVPR 2025 poster · arXiv:2503.03144
10 citations

Test-Time Ensemble via Linear Mode Connectivity: A Path to Better Adaptation

Byungjai Kim, Chanho Ahn, Wissam Baddar et al.

ICLR 2025 poster
3 citations

Token-Level Self-Play with Importance-Aware Guidance for Large Language Models

Tue Le, Hoang Tran, Quyen Tran et al.

NeurIPS 2025 poster

Tripartite Weight-Space Ensemble for Few-Shot Class-Incremental Learning

Juntae Lee, Munawar Hayat, Sungrack Yun

CVPR 2025 poster · arXiv:2506.15720
2 citations

TULIP: Token-length Upgraded CLIP

Ivona Najdenkoska, Mohammad Mahdi Derakhshani, Yuki Asano et al.

ICLR 2025 poster · arXiv:2410.10034
16 citations

UniCoTT: A Unified Framework for Structural Chain-of-Thought Distillation

Xianwei Zhuang, Zhihong Zhu, Zhichang Wang et al.

ICLR 2025 poster
7 citations

Unlocking SLM Potential for Data Analysis Code Generation via Non-Parametric Knowledge Distillation

Jinyang Li, Jack Williams, Nick McKenna et al.

NeurIPS 2025 poster

Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models

Tiezheng Zhang, Yitong Li, Yu-Cheng Chou et al.

NeurIPS 2025 poster · arXiv:2507.07104
2 citations

Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation

Sungmin Cha, Kyunghyun Cho

NeurIPS 2025 poster · arXiv:2505.13111
4 citations

AdaDistill: Adaptive Knowledge Distillation for Deep Face Recognition

Fadi Boutros, Vitomir Struc, Naser Damer

ECCV 2024 poster · arXiv:2407.01332
10 citations

Adaptive Multi-task Learning for Few-shot Object Detection

Yan Ren, Yanling Li, Wai-Kin Adams Kong

ECCV 2024 poster

Adversarially Robust Distillation by Reducing the Student-Teacher Variance Gap

Junhao Dong, Piotr Koniusz, Junxi Chen et al.

ECCV 2024 poster
10 citations

AMD: Automatic Multi-step Distillation of Large-scale Vision Models

Cheng Han, Qifan Wang, Sohail A Dianat et al.

ECCV 2024 poster · arXiv:2407.04208
14 citations

Bayesian Knowledge Distillation: A Bayesian Perspective of Distillation with Uncertainty Quantification

Luyang Fang, Yongkai Chen, Wenxuan Zhong et al.

ICML 2024 poster

BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation

Zekai Xu, Kang You, Qinghai Guo et al.

ECCV 2024 poster · arXiv:2407.09083
13 citations

Bridge Past and Future: Overcoming Information Asymmetry in Incremental Object Detection

Qijie Mo, Yipeng Gao, Shenghao Fu et al.

ECCV 2024 poster · arXiv:2407.11499
14 citations

Data-free Distillation of Diffusion Models with Bootstrapping

Jiatao Gu, Chen Wang, Shuangfei Zhai et al.

ICML 2024 poster

DetKDS: Knowledge Distillation Search for Object Detectors

Lujun Li, Yufan Bao, Peijie Dong et al.

ICML 2024 poster

DFD: Distilling the Feature Disparity Differently for Detectors

Kang Liu, Yingyi Zhang, Jingyun Zhang et al.

ICML 2024 poster

Direct Distillation between Different Domains

Jialiang Tang, Shuo Chen, Gang Niu et al.

ECCV 2024 poster · arXiv:2401.06826
6 citations

Distilling Knowledge from Large-Scale Image Models for Object Detection

Gang Li, Wenhai Wang, Xiang Li et al.

ECCV 2024 poster
3 citations

DistiLLM: Towards Streamlined Distillation for Large Language Models

Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.

ICML 2024 poster · arXiv:2402.03898

Do Topological Characteristics Help in Knowledge Distillation?

Jungeun Kim, Junwon You, Dongjin Lee et al.

ICML 2024 poster

DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection

Yongchao Feng, Shiwei Li, Yingjie Gao et al.

ICML 2024 poster · arXiv:2311.10437

DSMix: Distortion-Induced Saliency Map Based Pre-training for No-Reference Image Quality Assessment

Jinsong Shi, Xiaojiang Peng et al.

ECCV 2024 poster
5 citations

DεpS: Delayed ε-Shrinking for Faster Once-For-All Training

Aditya Annavajjala, Alind Khare, Animesh Agrawal et al.

ECCV 2024 poster · arXiv:2407.06167
1 citation

Embodied CoT Distillation From LLM To Off-the-shelf Agents

Wonje Choi, Woo Kyung Kim, Minjong Yoo et al.

ICML 2024 poster · arXiv:2412.11499

Enhanced Sparsification via Stimulative Training

Shengji Tang, Weihao Lin, Hancheng Ye et al.

ECCV 2024 poster · arXiv:2403.06417
2 citations

Enhancing Class-Imbalanced Learning with Pre-Trained Guidance through Class-Conditional Knowledge Distillation

Lan Li, Xin-Chun Li, Han-Jia Ye et al.

ICML 2024 poster

From Coarse to Fine: Enable Comprehensive Graph Self-supervised Learning with Multi-granular Semantic Ensemble

Qianlong Wen, Mingxuan Ju, Zhongyu Ouyang et al.

ICML 2024 poster

Good Teachers Explain: Explanation-Enhanced Knowledge Distillation

Amin Parchami, Moritz Böhle, Sukrut Rao et al.

ECCV 2024 poster · arXiv:2402.03119
18 citations