Poster "self-attention mechanism" Papers
18 papers found
Colors See Colors Ignore: Clothes Changing ReID with Color Disentanglement
Priyank Pathak, Yogesh Rawat
Efficient Concertormer for Image Deblurring and Beyond
Pin-Hung Kuo, Jinshan Pan, Shao-Yi Chien et al.
Hydra-SGG: Hybrid Relation Assignment for One-stage Scene Graph Generation
Minghan Chen, Guikun Chen, Wenguan Wang et al.
iFormer: Integrating ConvNet and Transformer for Mobile Application
Chuanyang Zheng
Quantized Spike-driven Transformer
Xuerui Qiu, Malu Zhang, Jieyuan Zhang et al.
Self-Attention-Based Contextual Modulation Improves Neural System Identification
Isaac Lin, Tianye Wang, Shang Gao et al.
Sim-DETR: Unlock DETR for Temporal Sentence Grounding
Jiajin Tang, Zhengxuan Wei, Yuchen Zhu et al.
Spiking Transformer with Spatial-Temporal Attention
Donghyun Lee, Yuhang Li, Youngeun Kim et al.
SynCL: A Synergistic Training Strategy with Instance-Aware Contrastive Learning for End-to-End Multi-Camera 3D Tracking
Shubo Lin, Yutong Kou, Zirui Wu et al.
Systematic Outliers in Large Language Models
Yongqi An, Xu Zhao, Tao Yu et al.
Fine-grained Local Sensitivity Analysis of Standard Dot-Product Self-Attention
Aaron Havens, Alexandre Araujo, Huan Zhang et al.
From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers
Muhammed Emrullah Ildiz, Yixiao Huang, Yingcong Li et al.
Polynomial-based Self-Attention for Table Representation Learning
Jayoung Kim, Yehjin Shin, Jeongwhan Choi et al.
PolyRoom: Room-aware Transformer for Floorplan Reconstruction
Yuzhou Liu, Lingjie Zhu, Xiaodong Ma et al.
Self-attention Networks Localize When QK-eigenspectrum Concentrates
Han Bao, Ryuichiro Hataya, Ryo Karakida
SMFANet: A Lightweight Self-Modulation Feature Aggregation Network for Efficient Image Super-Resolution
Mingjun Zheng, Long Sun, Jiangxin Dong et al.
Towards Causal Foundation Model: on Duality between Optimal Balancing and Attention
Jiaqi Zhang, Joel Jennings, Agrin Hilmkil et al.
What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding
Hongkang Li, Meng Wang, Tengfei Ma et al.