"attention mechanism" Papers
287 papers found • Page 5 of 6
FaceCoresetNet: Differentiable Coresets for Face Set Recognition
Gil Shapira, Yosi Keller
FALIP: Visual Prompt as Foveal Attention Boosts CLIP Zero-Shot Performance
Jiedong Zhuang, Jiaqi Hu, Lianrui Mu et al.
Free-Editor: Zero-shot Text-driven 3D Scene Editing
Md Nazmul Karim, Hasan Iqbal, Umar Khalid et al.
Gated Attention Coding for Training High-Performance and Efficient Spiking Neural Networks
Xuerui Qiu, Rui-Jie Zhu, Yuhong Chou et al.
GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction
Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang et al.
Generative Enzyme Design Guided by Functionally Important Sites and Small-Molecule Substrates
Zhenqiao Song, Yunlong Zhao, Wenxian Shi et al.
Graph-based Forecasting with Missing Data through Spatiotemporal Downsampling
Ivan Marisca, Cesare Alippi, Filippo Maria Bianchi
Graph Context Transformation Learning for Progressive Correspondence Pruning
Junwen Guo, Guobao Xiao, Shiping Wang et al.
Graph External Attention Enhanced Transformer
Jianqing Liang, Min Chen, Jiye Liang
Grid-Attention: Enhancing Computational Efficiency of Large Vision Models without Fine-Tuning
Pengyu Li, Biao Wang, Tianchu Guo et al.
GridFormer: Point-Grid Transformer for Surface Reconstruction
Shengtao Li, Ge Gao, Yudong Liu et al.
HandDAGT: A Denoising Adaptive Graph Transformer for 3D Hand Pose Estimation
Wencan Cheng, Eun-Ji Kim, Jong Hwan Ko
HENet: Hybrid Encoding for End-to-end Multi-task 3D Perception from Multi-view Cameras
Zhongyu Xia, ZhiWei Lin, Xinhao Wang et al.
Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables
Haisong Gong, Weizhi Xu, Shu Wu et al.
Hierarchical Aligned Multimodal Learning for NER on Tweet Posts
Peipei Liu, Hong Li, Yimo Ren et al.
High-Order Contrastive Learning with Fine-grained Comparative Levels for Sparse Ordinal Tensor Completion
Yu Dai, Junchen Shen, Zijie Zhai et al.
How Smooth Is Attention?
Valérie Castin, Pierre Ablin, Gabriel Peyré
How to Protect Copyright Data in Optimization of Large Language Models?
Timothy Chu, Zhao Song, Chiwun Yang
How Transformers Learn Causal Structure with Gradient Descent
Eshaan Nichani, Alex Damian, Jason Lee
IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation
Kai Li, Runxuan Yang, Fuchun Sun et al.
In-context Convergence of Transformers
Yu Huang, Yuan Cheng, Yingbin Liang
In-Context Language Learning: Architectures and Algorithms
Ekin Akyürek, Bailin Wang, Yoon Kim et al.
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu et al.
InfoNet: Neural Estimation of Mutual Information without Test-Time Optimization
Zhengyang Hu, Song Kang, Qunsong Zeng et al.
InsMapper: Exploring Inner-instance Information for Vectorized HD Mapping
Zhenhua Xu, Kwan-Yee K. Wong, Hengshuang Zhao
InterpreTabNet: Distilling Predictive Signals from Tabular Data by Salient Feature Interpretation
Jacob Si, Wendy Yusi Cheng, Michael Cooper et al.
I/O Complexity of Attention, or How Optimal is FlashAttention?
Barna Saha, Christopher Ye
Iterative Search Attribution for Deep Neural Networks
Zhiyu Zhu, Huaming Chen, Xinyi Wang et al.
KG-TREAT: Pre-training for Treatment Effect Estimation by Synergizing Patient Data with Knowledge Graphs
Ruoqi Liu, Lingfei Wu, Ping Zhang
KnowFormer: Revisiting Transformers for Knowledge Graph Reasoning
Junnan Liu, Qianren Mao, Weifeng Jiang et al.
Large Motion Model for Unified Multi-Modal Motion Generation
Mingyuan Zhang, Daisheng Jin, Chenyang Gu et al.
LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions
Victor Agostinelli III, Sanghyun Hong, Lizhong Chen
Learning Solution-Aware Transformers for Efficiently Solving Quadratic Assignment Problem
Zhentao Tan, Yadong Mu
Learning with Unmasked Tokens Drives Stronger Vision Learners
Taekyung Kim, Sanghyuk Chun, Byeongho Heo et al.
Make Your ViT-based Multi-view 3D Detectors Faster via Token Compression
Dingyuan Zhang, Dingkang Liang, Zichang Tan et al.
Memory Efficient Neural Processes via Constant Memory Attention Block
Leo Feng, Frederick Tung, Hossein Hajimirsadeghi et al.
Merging and Splitting Diffusion Paths for Semantically Coherent Panoramas
Fabio Quattrini, Vittorio Pippi, Silvia Cascianelli et al.
Meta Evidential Transformer for Few-Shot Open-Set Recognition
Hitesh Sapkota, Krishna Neupane, Qi Yu
Mobile Attention: Mobile-Friendly Linear-Attention for Vision Transformers
Zhiyu Yao, Jian Wang, Haixu Wu et al.
Multi-Architecture Multi-Expert Diffusion Models
Yunsung Lee, Jin-Young Kim, Hyojun Go et al.
MultiMax: Sparse and Multi-Modal Attention Learning
Yuxuan Zhou, Mario Fritz, Margret Keuper
OneVOS: Unifying Video Object Segmentation with All-in-One Transformer Framework
Wanyun Li, Pinxue Guo, Xinyu Zhou et al.
Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Haozheng Luo et al.
Parameter-Efficient Fine-Tuning with Controls
Chi Zhang, Jingpu Cheng, Yanyu Xu et al.
PIDformer: Transformer Meets Control Theory
Tam Nguyen, Cesar Uribe, Tan Nguyen et al.
PinNet: Pinpoint Instructive Information for Retrieval Augmented Code-to-Text Generation
Han Fu, Jian Tan, Pinhan Zhang et al.
PosFormer: Recognizing Complex Handwritten Mathematical Expression with Position Forest Transformer
Tongkun Guan, Chengyu Lin, Wei Shen et al.
Positional Knowledge is All You Need: Position-induced Transformer (PiT) for Operator Learning
Junfeng Chen, Kailiang Wu
Prompting a Pretrained Transformer Can Be a Universal Approximator
Aleksandar Petrov, Phil Torr, Adel Bibi
Prospector Heads: Generalized Feature Attribution for Large Models & Data
Gautam Machiraju, Alexander Derry, Arjun Desai et al.