Paper "knowledge distillation" Papers
22 papers found
AltDiffusion: A Multilingual Text-to-Image Diffusion Model
Fulong Ye, Guang Liu, Xinya Wu et al.
Boosting Residual Networks with Group Knowledge
Shengji Tang, Peng Ye, Baopu Li et al.
Building Variable-Sized Models via Learngene Pool
Boyu Shi, Shiyu Xia, Xu Yang et al.
COMBHelper: A Neural Approach to Reduce Search Space for Graph Combinatorial Problems
Hao Tian, Sourav Medya, Wei Ye
Cooperative Knowledge Distillation: A Learner Agnostic Approach
Michael Livanos, Ian Davidson, Stephen Wong
CSL: Class-Agnostic Structure-Constrained Learning for Segmentation including the Unseen
Hao Zhang, Fang Li, Lu Qi et al.
Distilling Autoregressive Models to Obtain High-Performance Non-autoregressive Solvers for Vehicle Routing Problems with Faster Inference Speed
Yubin Xiao, Di Wang, Boyang Li et al.
DistilVPR: Cross-Modal Knowledge Distillation for Visual Place Recognition
Sijie Wang, Rui She, Qiyu Kang et al.
Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning
Yan Fan, Yu Wang, Pengfei Zhu et al.
EPSD: Early Pruning with Self-Distillation for Efficient Model Compression
Dong Chen, Ning Liu, Yichen Zhu et al.
Expediting Contrastive Language-Image Pretraining via Self-Distilled Encoders
Bumsoo Kim, Jinhyung Kim, Yeonsik Jo et al.
Federated Learning with Extremely Noisy Clients via Negative Distillation
Yang Lu, Lin Chen, Yonggang Zhang et al.
Fine-Grained Knowledge Selection and Restoration for Non-exemplar Class Incremental Learning
Jiang-Tian Zhai, Xialei Liu, Lu Yu et al.
Generative Model-Based Feature Knowledge Distillation for Action Recognition
Guiqin Wang, Peng Zhao, Yanjiang Shi et al.
Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning
Jiangmeng Li, Yifan Jin, Hang Gao et al.
Let All Be Whitened: Multi-Teacher Distillation for Efficient Visual Retrieval
Zhe Ma, Jianfeng Dong, Shouling Ji et al.
SimDistill: Simulated Multi-Modal Distillation for BEV 3D Object Detection
Haimei Zhao, Qiming Zhang, Shanshan Zhao et al.
Simple Image-Level Classification Improves Open-Vocabulary Object Detection
Ruohuan Fang, Guansong Pang, Xiao Bai
SpikingBERT: Distilling BERT to Train Spiking Language Models Using Implicit Differentiation
Malyaban Bal, Abhronil Sengupta
Summarizing Stream Data for Memory-Constrained Online Continual Learning
Jianyang Gu, Kai Wang, Wei Jiang et al.
Sunshine to Rainstorm: Cross-Weather Knowledge Distillation for Robust 3D Object Detection
Xun Huang, Hai Wu, Xin Li et al.
Teacher as a Lenient Expert: Teacher-Agnostic Data-Free Knowledge Distillation
Hyunjune Shin, Dong-Wan Choi