ECCV Poster Papers matching "knowledge distillation"
30 papers found
AdaDistill: Adaptive Knowledge Distillation for Deep Face Recognition
Fadi Boutros, Vitomir Štruc, Naser Damer
Adaptive Multi-task Learning for Few-shot Object Detection
Yan Ren, Yanling Li, Wai-Kin Adams Kong
Adversarially Robust Distillation by Reducing the Student-Teacher Variance Gap
Junhao Dong, Piotr Koniusz, Junxi Chen et al.
AMD: Automatic Multi-step Distillation of Large-scale Vision Models
Cheng Han, Qifan Wang, Sohail A Dianat et al.
BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation
Zekai Xu, Kang You, Qinghai Guo et al.
Bridge Past and Future: Overcoming Information Asymmetry in Incremental Object Detection
Qijie Mo, Yipeng Gao, Shenghao Fu et al.
Direct Distillation between Different Domains
Jialiang Tang, Shuo Chen, Gang Niu et al.
Distilling Knowledge from Large-Scale Image Models for Object Detection
Gang Li, Wenhai Wang, Xiang Li et al.
DSMix: Distortion-Induced Saliency Map Based Pre-training for No-Reference Image Quality Assessment
Jinsong Shi, Xiaojiang Peng et al.
DεpS: Delayed ε-Shrinking for Faster Once-For-All Training
Aditya Annavajjala, Alind Khare, Animesh Agrawal et al.
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
Amin Parchami, Moritz Böhle, Sukrut Rao et al.
Harmonizing Knowledge Transfer in Neural Network with Unified Distillation
Yaomin Huang, Faming Fang, Zaoming Yan et al.
Human Motion Forecasting in Dynamic Domain Shifts: A Homeostatic Continual Test-time Adaptation Framework
Qiongjie Cui, Huaijiang Sun, Bin Li et al.
Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-Of-Distribution Images
Jacopo Bonato, Marco Cotogni, Luigi Sabetta
LASS3D: Language-Assisted Semi-Supervised 3D Semantic Segmentation with Progressive Unreliable Data Exploitation
Jianan Li, Qiulei Dong
Learning Modality-agnostic Representation for Semantic Segmentation from Any Modalities
Xu Zheng, Yuanhuiyi Lyu, Lin Wang
LEROjD: Lidar Extended Radar-Only Object Detection
Patrick Palmer, Martin Krüger, Stefan Schütte et al.
LiDAR-based All-weather 3D Object Detection via Prompting and Distilling 4D Radar
Yujeong Chae, Hyeonseong Kim, Changgyoon Oh et al.
Markov Knowledge Distillation: Make Nasty Teachers trained by Self-undermining Knowledge Distillation Fully Distillable
En-Hui Yang, Linfeng Ye
Mitigating Background Shift in Class-Incremental Semantic Segmentation
Gilhan Park, WonJun Moon, SuBeen Lee et al.
MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices
Yang Zhao, Zhisheng Xiao, Yanwu Xu et al.
Multi-scale Cross Distillation for Object Detection in Aerial Images
Kun Wang, Zi Wang, Zhang Li et al.
Open Vocabulary 3D Scene Understanding via Geometry Guided Self-Distillation
Pengfei Wang, Yuxi Wang, Shuai Li et al.
PEA-Diffusion: Parameter-Efficient Adapter with Knowledge Distillation in non-English Text-to-Image Generation
Jian Ma, Chen Chen, Qingsong Xie et al.
Progressive Pretext Task Learning for Human Trajectory Prediction
Xiaotong Lin, Tianming Liang, Jian-Huang Lai et al.
Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models
Yu-Chu Yu, Chi-Pin Huang, Jr-Jen Chen et al.
Self-Adapting Large Visual-Language Models to Edge Devices across Visual Modalities
Kaiwen Cai, ZheKai Duan, Gaowen Liu et al.
Self-Cooperation Knowledge Distillation for Novel Class Discovery
Yuzheng Wang, Zhaoyu Chen, Dingkang Yang et al.
Synchronization is All You Need: Exocentric-to-Egocentric Transfer for Temporal Action Segmentation with Unlabeled Synchronized Video Pairs
Camillo Quattrocchi, Antonino Furnari, Daniele Di Mauro et al.
UNIC: Universal Classification Models via Multi-teacher Distillation
Yannis Kalantidis, Diane Larlus, Mert Bulent Sariyildiz et al.