2024 "knowledge distillation" Papers
58 papers found • Page 1 of 2
Adaptive Multi-task Learning for Few-shot Object Detection
Yan Ren, Yanling Li, Wai-Kin Adams Kong
Adversarially Robust Distillation by Reducing the Student-Teacher Variance Gap
Junhao Dong, Piotr Koniusz, Junxi Chen et al.
AltDiffusion: A Multilingual Text-to-Image Diffusion Model
Fulong Ye, Guang Liu, Xinya Wu et al.
AMD: Automatic Multi-step Distillation of Large-scale Vision Models
Cheng Han, Qifan Wang, Sohail A. Dianat et al.
Bayesian Knowledge Distillation: A Bayesian Perspective of Distillation with Uncertainty Quantification
Luyang Fang, Yongkai Chen, Wenxuan Zhong et al.
BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation
Zekai Xu, Kang You, Qinghai Guo et al.
Boosting Residual Networks with Group Knowledge
Shengji Tang, Peng Ye, Baopu Li et al.
Bridge Past and Future: Overcoming Information Asymmetry in Incremental Object Detection
Qijie Mo, Yipeng Gao, Shenghao Fu et al.
Building Variable-Sized Models via Learngene Pool
Boyu Shi, Shiyu Xia, Xu Yang et al.
COMBHelper: A Neural Approach to Reduce Search Space for Graph Combinatorial Problems
Hao Tian, Sourav Medya, Wei Ye
Cooperative Knowledge Distillation: A Learner Agnostic Approach
Michael Livanos, Ian Davidson, Stephen Wong
CSL: Class-Agnostic Structure-Constrained Learning for Segmentation Including the Unseen
Hao Zhang, Fang Li, Lu Qi et al.
Data-free Distillation of Diffusion Models with Bootstrapping
Jiatao Gu, Chen Wang, Shuangfei Zhai et al.
DetKDS: Knowledge Distillation Search for Object Detectors
Lujun Li, Yufan Bao, Peijie Dong et al.
DFD: Distilling the Feature Disparity Differently for Detectors
Kang Liu, Yingyi Zhang, Jingyun Zhang et al.
Direct Distillation between Different Domains
Jialiang Tang, Shuo Chen, Gang Niu et al.
Distilling Autoregressive Models to Obtain High-Performance Non-autoregressive Solvers for Vehicle Routing Problems with Faster Inference Speed
Yubin Xiao, Di Wang, Boyang Li et al.
Distilling Knowledge from Large-Scale Image Models for Object Detection
Gang Li, Wenhai Wang, Xiang Li et al.
DistiLLM: Towards Streamlined Distillation for Large Language Models
Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.
DistilVPR: Cross-Modal Knowledge Distillation for Visual Place Recognition
Sijie Wang, Rui She, Qiyu Kang et al.
Do Topological Characteristics Help in Knowledge Distillation?
Jungeun Kim, Junwon You, Dongjin Lee et al.
DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection
Yongchao Feng, Shiwei Li, Yingjie Gao et al.
Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning
Yan Fan, Yu Wang, Pengfei Zhu et al.
DεpS: Delayed ε-Shrinking for Faster Once-For-All Training
Aditya Annavajjala, Alind Khare, Animesh Agrawal et al.
Embodied CoT Distillation From LLM To Off-the-shelf Agents
Wonje Choi, Woo Kyung Kim, Minjong Yoo et al.
Enhancing Class-Imbalanced Learning with Pre-Trained Guidance through Class-Conditional Knowledge Distillation
Lan Li, Xin-Chun Li, Han-Jia Ye et al.
EPSD: Early Pruning with Self-Distillation for Efficient Model Compression
Dong Chen, Ning Liu, Yichen Zhu et al.
Expediting Contrastive Language-Image Pretraining via Self-Distilled Encoders
Bumsoo Kim, Jinhyung Kim, Yeonsik Jo et al.
Federated Learning with Extremely Noisy Clients via Negative Distillation
Yang Lu, Lin Chen, Yonggang Zhang et al.
Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning
Jiang-Tian Zhai, Xialei Liu, Lu Yu et al.
From Coarse to Fine: Enable Comprehensive Graph Self-supervised Learning with Multi-granular Semantic Ensemble
Qianlong Wen, Mingxuan Ju, Zhongyu Ouyang et al.
Generative Model-Based Feature Knowledge Distillation for Action Recognition
Guiqin Wang, Peng Zhao, Yanjiang Shi et al.
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
Amin Parchami, Moritz Böhle, Sukrut Rao et al.
Harmonizing Knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Faming Fang, Zaoming Yan et al.
Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning
Jiangmeng Li, Yifan Jin, Hang Gao et al.
Human Motion Forecasting in Dynamic Domain Shifts: A Homeostatic Continual Test-time Adaptation Framework
Qiongjie Cui, Huaijiang Sun, Bin Li et al.
Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-Of-Distribution Images
Jacopo Bonato, Marco Cotogni, Luigi Sabetta
Keypoint-based Progressive Chain-of-Thought Distillation for LLMs
Kaituo Feng, Changsheng Li, Xiaolu Zhang et al.
Knowledge Distillation with Auxiliary Variable
Bo Peng, Zhen Fang, Guangquan Zhang et al.
LASS3D: Language-Assisted Semi-Supervised 3D Semantic Segmentation with Progressive Unreliable Data Exploitation
Jianan Li, Qiulei Dong
Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval
Zhe Ma, Jianfeng Dong, Shouling Ji et al.
MH-pFLID: Model Heterogeneous personalized Federated Learning via Injection and Distillation for Medical Data Analysis
Luyuan Xie, Manqing Lin, Tianyu Luan et al.
MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices
Yang Zhao, Zhisheng Xiao, Yanwu Xu et al.
Multi-scale Cross Distillation for Object Detection in Aerial Images
Kun Wang, Zi Wang, Zhang Li et al.
Online Speculative Decoding
Xiaoxuan Liu, Lanxiang Hu, Peter Bailis et al.
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors
Chun-Yin Huang, Kartik Srinivas, Xin Zhang et al.
PEA-Diffusion: Parameter-Efficient Adapter with Knowledge Distillation in non-English Text-to-Image Generation
Jian Ma, Chen Chen, Qingsong Xie et al.
Recurrent Early Exits for Federated Learning with Heterogeneous Clients
Royson Lee, Javier Fernandez-Marques, Xu Hu et al.
Rethinking Momentum Knowledge Distillation in Online Continual Learning
Nicolas Michel, Maorong Wang, Ling Xiao et al.
Revisit the Essence of Distilling Knowledge through Calibration
Wen-Shu Fan, Su Lu, Xin-Chun Li et al.