"knowledge distillation" Papers

108 papers found • Page 2 of 3

AltDiffusion: A Multilingual Text-to-Image Diffusion Model

Fulong Ye, Guang Liu, Xinya Wu et al.

AAAI 2024 paper · arXiv:2308.09991

AMD: Automatic Multi-step Distillation of Large-scale Vision Models

Cheng Han, Qifan Wang, Sohail A Dianat et al.

ECCV 2024 poster · arXiv:2407.04208
14 citations

Bayesian Knowledge Distillation: A Bayesian Perspective of Distillation with Uncertainty Quantification

Luyang Fang, Yongkai Chen, Wenxuan Zhong et al.

ICML 2024 poster

BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation

Zekai Xu, Kang You, Qinghai Guo et al.

ECCV 2024 poster · arXiv:2407.09083
13 citations

Boosting Residual Networks with Group Knowledge

Shengji Tang, Peng Ye, Baopu Li et al.

AAAI 2024 paper · arXiv:2308.13772
6 citations

Bridge Past and Future: Overcoming Information Asymmetry in Incremental Object Detection

Qijie Mo, Yipeng Gao, Shenghao Fu et al.

ECCV 2024 poster · arXiv:2407.11499
14 citations

Building Variable-Sized Models via Learngene Pool

Boyu Shi, Shiyu Xia, Xu Yang et al.

AAAI 2024 paper · arXiv:2312.05743
5 citations

COMBHelper: A Neural Approach to Reduce Search Space for Graph Combinatorial Problems

Hao Tian, Sourav Medya, Wei Ye

AAAI 2024 paper · arXiv:2312.09086
5 citations

Cooperative Knowledge Distillation: A Learner Agnostic Approach

Michael Livanos, Ian Davidson, Stephen Wong

AAAI 2024 paper · arXiv:2402.05942
1 citation

CSL: Class-Agnostic Structure-Constrained Learning for Segmentation including the Unseen

Hao Zhang, Fang Li, Lu Qi et al.

AAAI 2024 paper · arXiv:2312.05538
15 citations

Data-free Distillation of Diffusion Models with Bootstrapping

Jiatao Gu, Chen Wang, Shuangfei Zhai et al.

ICML 2024 poster

DetKDS: Knowledge Distillation Search for Object Detectors

Lujun Li, Yufan Bao, Peijie Dong et al.

ICML 2024 poster

DFD: Distilling the Feature Disparity Differently for Detectors

Kang Liu, Yingyi Zhang, Jingyun Zhang et al.

ICML 2024 poster

Direct Distillation between Different Domains

Jialiang Tang, Shuo Chen, Gang Niu et al.

ECCV 2024 poster · arXiv:2401.06826
6 citations

Distilling Autoregressive Models to Obtain High-Performance Non-autoregressive Solvers for Vehicle Routing Problems with Faster Inference Speed

Yubin Xiao, Di Wang, Boyang Li et al.

AAAI 2024 paper · arXiv:2312.12469
31 citations

Distilling Knowledge from Large-Scale Image Models for Object Detection

Gang Li, Wenhai Wang, Xiang Li et al.

ECCV 2024 poster
3 citations

DistiLLM: Towards Streamlined Distillation for Large Language Models

Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.

ICML 2024 poster

DistilVPR: Cross-Modal Knowledge Distillation for Visual Place Recognition

Sijie Wang, Rui She, Qiyu Kang et al.

AAAI 2024 paper · arXiv:2312.10616

Do Topological Characteristics Help in Knowledge Distillation?

Jungeun Kim, Junwon You, Dongjin Lee et al.

ICML 2024 poster

DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection

Yongchao Feng, Shiwei Li, Yingjie Gao et al.

ICML 2024 poster

Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning

Yan Fan, Yu Wang, Pengfei Zhu et al.

AAAI 2024 paper · arXiv:2312.16409
11 citations

DεpS: Delayed ε-Shrinking for Faster Once-For-All Training

Aditya Annavajjala, Alind Khare, Animesh Agrawal et al.

ECCV 2024 poster · arXiv:2407.06167
1 citation

Embodied CoT Distillation From LLM To Off-the-shelf Agents

Wonje Choi, Woo Kyung Kim, Minjong Yoo et al.

ICML 2024 poster

Enhancing Class-Imbalanced Learning with Pre-Trained Guidance through Class-Conditional Knowledge Distillation

Lan Li, Xin-Chun Li, Han-Jia Ye et al.

ICML 2024 poster

EPSD: Early Pruning with Self-Distillation for Efficient Model Compression

Dong Chen, Ning Liu, Yichen Zhu et al.

AAAI 2024 paper · arXiv:2402.00084
8 citations

Expediting Contrastive Language-Image Pretraining via Self-Distilled Encoders

Bumsoo Kim, Jinhyung Kim, Yeonsik Jo et al.

AAAI 2024 paper · arXiv:2312.12659
5 citations

Federated Learning with Extremely Noisy Clients via Negative Distillation

Yang Lu, Lin Chen, Yonggang Zhang et al.

AAAI 2024 paper · arXiv:2312.12703
20 citations

Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning

Jiang-Tian Zhai, Xialei Liu, Lu Yu et al.

AAAI 2024 paper · arXiv:2312.12722
13 citations

From Coarse to Fine: Enable Comprehensive Graph Self-supervised Learning with Multi-granular Semantic Ensemble

Qianlong Wen, Mingxuan Ju, Zhongyu Ouyang et al.

ICML 2024 poster

Generative Model-Based Feature Knowledge Distillation for Action Recognition

Guiqin Wang, Peng Zhao, Yanjiang Shi et al.

AAAI 2024 paper · arXiv:2312.08644
6 citations

Good Teachers Explain: Explanation-Enhanced Knowledge Distillation

Amin Parchami, Moritz Böhle, Sukrut Rao et al.

ECCV 2024 poster · arXiv:2402.03119
18 citations

Harmonizing knowledge Transfer in Neural Network with Unified Distillation

Yaomin Huang, Faming Fang, Zaoming Yan et al.

ECCV 2024 poster · arXiv:2409.18565
1 citation

Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning

Jiangmeng Li, Yifan Jin, Hang Gao et al.

AAAI 2024 paper · arXiv:2312.14222

Human Motion Forecasting in Dynamic Domain Shifts: A Homeostatic Continual Test-time Adaptation Framework

Qiongjie Cui, Huaijiang Sun, Bin Li et al.

ECCV 2024 poster
1 citation

Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-Of-Distribution Images

Jacopo Bonato, Marco Cotogni, Luigi Sabetta

ECCV 2024 poster · arXiv:2404.12922
19 citations

Keypoint-based Progressive Chain-of-Thought Distillation for LLMs

Kaituo Feng, Changsheng Li, Xiaolu Zhang et al.

ICML 2024 poster

Knowledge Distillation with Auxiliary Variable

Bo Peng, Zhen Fang, Guangquan Zhang et al.

ICML 2024 poster

LASS3D: Language-Assisted Semi-Supervised 3D Semantic Segmentation with Progressive Unreliable Data Exploitation

Jianan Li, Qiulei Dong

ECCV 2024 poster
1 citation

Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval

Zhe Ma, Jianfeng Dong, Shouling Ji et al.

AAAI 2024 paper · arXiv:2312.09716
10 citations

MH-pFLID: Model Heterogeneous personalized Federated Learning via Injection and Distillation for Medical Data Analysis

Luyuan Xie, Manqing Lin, Tianyu Luan et al.

ICML 2024 poster

MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices

Yang Zhao, Zhisheng Xiao, Yanwu Xu et al.

ECCV 2024 poster · arXiv:2311.16567
35 citations

Multi-scale Cross Distillation for Object Detection in Aerial Images

Kun Wang, Zi Wang, Zhang Li et al.

ECCV 2024 poster
2 citations

Online Speculative Decoding

Xiaoxuan Liu, Lanxiang Hu, Peter Bailis et al.

ICML 2024 poster

Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors

Chun-Yin Huang, Kartik Srinivas, Xin Zhang et al.

ICML 2024 poster

PEA-Diffusion: Parameter-Efficient Adapter with Knowledge Distillation in non-English Text-to-Image Generation

Jian Ma, Chen Chen, Qingsong Xie et al.

ECCV 2024 poster · arXiv:2311.17086
8 citations

Progressive Pretext Task Learning for Human Trajectory Prediction

Xiaotong Lin, Tianming Liang, Jian-Huang Lai et al.

ECCV 2024 poster · arXiv:2407.11588
27 citations

Recurrent Early Exits for Federated Learning with Heterogeneous Clients

Royson Lee, Javier Fernandez-Marques, Xu Hu et al.

ICML 2024 poster

Rethinking Momentum Knowledge Distillation in Online Continual Learning

Nicolas Michel, Maorong Wang, Ling Xiao et al.

ICML 2024 poster

Revisit the Essence of Distilling Knowledge through Calibration

Wen-Shu Fan, Su Lu, Xin-Chun Li et al.

ICML 2024 poster

Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models

Yu-Chu Yu, Chi-Pin Huang, Jr-Jen Chen et al.

ECCV 2024 poster · arXiv:2403.09296
16 citations