"attention mechanism" Papers

166 papers found • Page 3 of 4

HENet: Hybrid Encoding for End-to-end Multi-task 3D Perception from Multi-view Cameras

Zhongyu Xia, Zhiwei Lin, Xinhao Wang et al.

ECCV 2024 • poster • arXiv:2404.02517
19 citations

Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables

Haisong Gong, Weizhi Xu, Shu Wu et al.

AAAI 2024 • paper • arXiv:2402.13028
16 citations

Hierarchical Aligned Multimodal Learning for NER on Tweet Posts

Peipei Liu, Hong Li, Yimo Ren et al.

AAAI 2024 • paper • arXiv:2305.08372
8 citations

High-Order Contrastive Learning with Fine-grained Comparative Levels for Sparse Ordinal Tensor Completion

Yu Dai, Junchen Shen, Zijie Zhai et al.

ICML 2024 • poster

How Smooth Is Attention?

Valérie Castin, Pierre Ablin, Gabriel Peyré

ICML 2024 • poster

How to Protect Copyright Data in Optimization of Large Language Models?

Timothy Chu, Zhao Song, Chiwun Yang

AAAI 2024 • paper • arXiv:2308.12247

How Transformers Learn Causal Structure with Gradient Descent

Eshaan Nichani, Alex Damian, Jason Lee

ICML 2024 • poster

IIANet: An Intra- and Inter-Modality Attention Network for Audio-Visual Speech Separation

Kai Li, Runxuan Yang, Fuchun Sun et al.

ICML 2024 • oral

In-context Convergence of Transformers

Yu Huang, Yuan Cheng, Yingbin Liang

ICML 2024 • poster

In-Context Language Learning: Architectures and Algorithms

Ekin Akyürek, Bailin Wang, Yoon Kim et al.

ICML 2024 • poster

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

Shiqi Chen, Miao Xiong, Junteng Liu et al.

ICML 2024 • poster

InfoNet: Neural Estimation of Mutual Information without Test-Time Optimization

Zhengyang Hu, Song Kang, Qunsong Zeng et al.

ICML 2024 • poster

InsMapper: Exploring Inner-instance Information for Vectorized HD Mapping

Zhenhua Xu, Kwan-Yee K. Wong, Hengshuang Zhao

ECCV 2024 • poster • arXiv:2308.08543
18 citations

InterpreTabNet: Distilling Predictive Signals from Tabular Data by Salient Feature Interpretation

Jacob Si, Wendy Yusi Cheng, Michael Cooper et al.

ICML 2024 • spotlight

I/O Complexity of Attention, or How Optimal is FlashAttention?

Barna Saha, Christopher Ye

ICML 2024 • poster

Iterative Search Attribution for Deep Neural Networks

Zhiyu Zhu, Huaming Chen, Xinyi Wang et al.

ICML 2024 • poster

KG-TREAT: Pre-training for Treatment Effect Estimation by Synergizing Patient Data with Knowledge Graphs

Ruoqi Liu, Lingfei Wu, Ping Zhang

AAAI 2024 • paper • arXiv:2403.03791

KnowFormer: Revisiting Transformers for Knowledge Graph Reasoning

Junnan Liu, Qianren Mao, Weifeng Jiang et al.

ICML 2024 • poster

LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions

Victor Agostinelli III, Sanghyun Hong, Lizhong Chen

ICML 2024 • poster

Learning Solution-Aware Transformers for Efficiently Solving Quadratic Assignment Problem

Zhentao Tan, Yadong Mu

ICML 2024 • poster

Make Your ViT-based Multi-view 3D Detectors Faster via Token Compression

Dingyuan Zhang, Dingkang Liang, Zichang Tan et al.

ECCV 2024 • poster • arXiv:2409.00633
4 citations

Memory Efficient Neural Processes via Constant Memory Attention Block

Leo Feng, Frederick Tung, Hossein Hajimirsadeghi et al.

ICML 2024 • poster

Meta Evidential Transformer for Few-Shot Open-Set Recognition

Hitesh Sapkota, Krishna Neupane, Qi Yu

ICML 2024 • poster

Mobile Attention: Mobile-Friendly Linear-Attention for Vision Transformers

Zhiyu Yao, Jian Wang, Haixu Wu et al.

ICML 2024 • poster

Multi-Architecture Multi-Expert Diffusion Models

Yunsung Lee, Jin-Young Kim, Hyojun Go et al.

AAAI 2024 • paper • arXiv:2306.04990
39 citations

MultiMax: Sparse and Multi-Modal Attention Learning

Yuxuan Zhou, Mario Fritz, Margret Keuper

ICML 2024 • poster

OneVOS: Unifying Video Object Segmentation with All-in-One Transformer Framework

Wanyun Li, Pinxue Guo, Xinyu Zhou et al.

ECCV 2024 • poster • arXiv:2403.08682
11 citations

Outlier-Efficient Hopfield Layers for Large Transformer-Based Models

Jerry Yao-Chieh Hu, Pei-Hsuan Chang, Haozheng Luo et al.

ICML 2024 • poster • arXiv:2404.03828

Parameter-Efficient Fine-Tuning with Controls

Chi Zhang, Jingpu Cheng, Yanyu Xu et al.

ICML 2024 • poster

PIDformer: Transformer Meets Control Theory

Tam Nguyen, Cesar Uribe, Tan Nguyen et al.

ICML 2024 • poster

PinNet: Pinpoint Instructive Information for Retrieval Augmented Code-to-Text Generation

Han Fu, Jian Tan, Pinhan Zhang et al.

ICML 2024 • poster

Positional Knowledge is All You Need: Position-induced Transformer (PiT) for Operator Learning

Junfeng Chen, Kailiang Wu

ICML 2024 • poster

Prompting a Pretrained Transformer Can Be a Universal Approximator

Aleksandar Petrov, Phil Torr, Adel Bibi

ICML 2024 • poster

Prospector Heads: Generalized Feature Attribution for Large Models & Data

Gautam Machiraju, Alexander Derry, Arjun Desai et al.

ICML 2024 • poster

Relaxing the Accurate Imputation Assumption in Doubly Robust Learning for Debiased Collaborative Filtering

Haoxuan Li, Chunyuan Zheng, Shuyi Wang et al.

ICML 2024 • spotlight

Repeat After Me: Transformers are Better than State Space Models at Copying

Samy Jelassi, David Brandfonbrener, Sham Kakade et al.

ICML 2024 • poster

S2WAT: Image Style Transfer via Hierarchical Vision Transformer Using Strips Window Attention

Chiyu Zhang, Xiaogang Xu, Lei Wang et al.

AAAI 2024 • paper • arXiv:2210.12381
46 citations

SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention

Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov et al.

ICML 2024 • poster

ScanERU: Interactive 3D Visual Grounding Based on Embodied Reference Understanding

Ziyang Lu, Yunqiang Pei, Guoqing Wang et al.

AAAI 2024 • paper • arXiv:2303.13186

Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes

Yingyi Chen, Qinghua Tao, Francesco Tonin et al.

ICML 2024 • poster

Semantic-Aware Data Augmentation for Text-to-Image Synthesis

Zhaorui Tan, Xi Yang, Kaizhu Huang

AAAI 2024 • paper • arXiv:2312.07951
4 citations

Semantic Lens: Instance-Centric Semantic Alignment for Video Super-resolution

AAAI 2024 • paper • arXiv:2312.07823

SeTformer Is What You Need for Vision and Language

Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger et al.

AAAI 2024 • paper • arXiv:2401.03540
7 citations

Simple linear attention language models balance the recall-throughput tradeoff

Simran Arora, Sabri Eyuboglu, Michael Zhang et al.

ICML 2024 • spotlight

SparQ Attention: Bandwidth-Efficient LLM Inference

Luka Ribar, Ivan Chelombiev, Luke Hudlass-Galley et al.

ICML 2024 • poster

Sparse and Structured Hopfield Networks

Saúl Santos, Vlad Niculae, Daniel McNamee et al.

ICML 2024 • spotlight

SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization

Xixu Hu, Runkai Zheng, Jindong Wang et al.

ECCV 2024 • poster • arXiv:2402.03317
5 citations

SpikingBERT: Distilling BERT to Train Spiking Language Models Using Implicit Differentiation

Malyaban Bal, Abhronil Sengupta

AAAI 2024 • paper • arXiv:2308.10873
70 citations

StableMask: Refining Causal Masking in Decoder-only Transformer

Qingyu Yin, Xuzheng He, Xiang Zhuang et al.

ICML 2024 • poster

Statistical Test for Attention Maps in Vision Transformers

Tomohiro Shiraishi, Daiki Miwa, Teruyuki Katsuoka et al.

ICML 2024 • poster