ICML Poster Papers Matching "vision transformers"
13 papers found
Adapting Pretrained ViTs with Convolution Injector for Visuo-Motor Control
Dongyoon Hwang, Byungkun Lee, Hojoon Lee et al.
ICML 2024 (poster), arXiv:2406.06072
AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers
Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer et al.
ICML 2024 (poster), arXiv:2402.05602
Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption
Itamar Zimerman, Moran Baruch, Nir Drucker et al.
ICML 2024 (poster), arXiv:2311.08610
Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks
Mikkel Jordahn, Pablo Olmos
ICML 2024 (poster), arXiv:2405.01196
Fine-grained Local Sensitivity Analysis of Standard Dot-Product Self-Attention
Aaron Havens, Alexandre Araujo, Huan Zhang et al.
ICML 2024 (poster)
KernelWarehouse: Rethinking the Design of Dynamic Convolution
Chao Li, Anbang Yao
ICML 2024 (poster), arXiv:2406.07879
Mobile Attention: Mobile-Friendly Linear-Attention for Vision Transformers
Zhiyu Yao, Jian Wang, Haixu Wu et al.
ICML 2024 (poster)
Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Hengyi Wang, Shiwei Tan, Hao Wang
ICML 2024 (poster), arXiv:2406.12649
Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness
Honghao Chen, Yurong Zhang, Xiaokun Feng et al.
ICML 2024 (poster), arXiv:2407.08972
Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications
Zixuan Hu, Yongxian Wei, Li Shen et al.
ICML 2024 (poster), arXiv:2510.27186
Sub-token ViT Embedding via Stochastic Resonance Transformers
Dong Lao, Yangchao Wu, Tian Yu Liu et al.
ICML 2024 (poster), arXiv:2310.03967
Vision Transformers as Probabilistic Expansion from Learngene
Qiufeng Wang, Xu Yang, Haokun Chen et al.
ICML 2024 (poster)
xT: Nested Tokenization for Larger Context in Large Images
Ritwik Gupta, Shufan Li, Tyler Zhu et al.
ICML 2024 (poster), arXiv:2403.01915