"transformer architecture" Papers

137 papers found • Page 2 of 3

A Comparative Study of Image Restoration Networks for General Backbone Network Design

Xiangyu Chen, Zheyuan Li, Yuandong Pu et al.

ECCV 2024 • poster • arXiv:2310.11881
53 citations

ALERT-Transformer: Bridging Asynchronous and Synchronous Machine Learning for Real-Time Event-based Spatio-Temporal Data

Carmen Martin-Turrero, Maxence Bouvier, Manuel Breitenstein et al.

ICML 2024 • oral

An Incremental Unified Framework for Small Defect Inspection

Jiaqi Tang, Hao Lu, Xiaogang Xu et al.

ECCV 2024 • poster • arXiv:2312.08917
21 citations

A Tale of Tails: Model Collapse as a Change of Scaling Laws

Elvis Dohmatob, Yunzhen Feng, Pu Yang et al.

ICML 2024 • poster

Attention Disturbance and Dual-Path Constraint Network for Occluded Person Re-identification

Jiaer Xia, Lei Tan, Pingyang Dai et al.

AAAI 2024 • paper • arXiv:2303.10976
24 citations

Attention Meets Post-hoc Interpretability: A Mathematical Perspective

Gianluigi Lopardo, Frederic Precioso, Damien Garreau

ICML 2024 • poster

Auctionformer: A Unified Deep Learning Algorithm for Solving Equilibrium Strategies in Auction Games

Kexin Huang, Ziqian Chen, Xue Wang et al.

ICML 2024 • poster

AVSegFormer: Audio-Visual Segmentation with Transformer

Shengyi Gao, Zhe Chen, Guo Chen et al.

AAAI 2024 • paper • arXiv:2307.01146

Breaking through the learning plateaus of in-context learning in Transformer

Jingwen Fu, Tao Yang, Yuwang Wang et al.

ICML 2024 • poster

Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA

Wentao Mo, Yang Liu

AAAI 2024 • paper • arXiv:2402.15933
26 citations

CarFormer: Self-Driving with Learned Object-Centric Representations

Shadi Hamdan, Fatma Guney

ECCV 2024 • poster • arXiv:2407.15843
11 citations

Converting Transformers to Polynomial Form for Secure Inference Over Homomorphic Encryption

Itamar Zimerman, Moran Baruch, Nir Drucker et al.

ICML 2024 • poster

Correlation Matching Transformation Transformers for UHD Image Restoration

Cong Wang, Jinshan Pan, Wei Wang et al.

AAAI 2024 • paper • arXiv:2406.00629
57 citations

Distilling Morphology-Conditioned Hypernetworks for Efficient Universal Morphology Control

Zheng Xiong, Risto Vuorio, Jacob Beck et al.

ICML 2024 • poster

Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference

Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski et al.

ICML 2024 • poster

Exploring Transformer Extrapolation

Zhen Qin, Yiran Zhong, Hui Deng

AAAI 2024 • paper • arXiv:2307.10156
12 citations

Gated Linear Attention Transformers with Hardware-Efficient Training

Songlin Yang, Bailin Wang, Yikang Shen et al.

ICML 2024 • poster

GeoMFormer: A General Architecture for Geometric Molecular Representation Learning

Tianlang Chen, Shengjie Luo, Di He et al.

ICML 2024 • poster

Graph External Attention Enhanced Transformer

Jianqing Liang, Min Chen, Jiye Liang

ICML 2024 • poster

GridFormer: Point-Grid Transformer for Surface Reconstruction

Shengtao Li, Ge Gao, Yudong Liu et al.

AAAI 2024 • paper • arXiv:2401.02292

HDformer: A Higher-Dimensional Transformer for Detecting Diabetes Utilizing Long-Range Vascular Signals

Ella Lan

AAAI 2024 • paper • arXiv:2303.11340
3 citations

How do Transformers Perform In-Context Autoregressive Learning?

Michael Sander, Raja Giryes, Taiji Suzuki et al.

ICML 2024 • poster

How Smooth Is Attention?

Valérie Castin, Pierre Ablin, Gabriel Peyré

ICML 2024 • poster

How to Protect Copyright Data in Optimization of Large Language Models?

Timothy Chu, Zhao Song, Chiwun Yang

AAAI 2024 • paper • arXiv:2308.12247

How Transformers Learn Causal Structure with Gradient Descent

Eshaan Nichani, Alex Damian, Jason Lee

ICML 2024 • poster

HybridGait: A Benchmark for Spatial-Temporal Cloth-Changing Gait Recognition with Hybrid Explorations

Yilan Dong, Chunlin Yu, Ruiyang Ha et al.

AAAI 2024 • paper • arXiv:2401.00271
23 citations

Improving Transformers with Dynamically Composable Multi-Head Attention

Da Xiao, Qingye Meng, Shengping Li et al.

ICML 2024 • poster

In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization

Herilalaina Rakotoarison, Steven Adriaensen, Neeratyoy Mallik et al.

ICML 2024 • poster

In-Context Language Learning: Architectures and Algorithms

Ekin Akyürek, Bailin Wang, Yoon Kim et al.

ICML 2024 • poster

In-context Learning on Function Classes Unveiled for Transformers

Zhijie Wang, Bo Jiang, Shuai Li

ICML 2024 • poster

InsMapper: Exploring Inner-instance Information for Vectorized HD Mapping

Zhenhua Xu, Kwan-Yee K. Wong, Hengshuang Zhao

ECCV 2024 • poster • arXiv:2308.08543
18 citations

I/O Complexity of Attention, or How Optimal is FlashAttention?

Barna Saha, Christopher Ye

ICML 2024 • poster

Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification

Jimmy Lin, Junkai Li, Jiasi Gao et al.

AAAI 2024 • paper • arXiv:2404.15279
1 citation

KnowFormer: Revisiting Transformers for Knowledge Graph Reasoning

Junnan Liu, Qianren Mao, Weifeng Jiang et al.

ICML 2024 • poster

Learning Solution-Aware Transformers for Efficiently Solving Quadratic Assignment Problem

Zhentao Tan, Yadong Mu

ICML 2024 • poster

Longitudinal Targeted Minimum Loss-based Estimation with Temporal-Difference Heterogeneous Transformer

Toru Shirakawa, Yi Li, Yulun Wu et al.

ICML 2024 • oral

LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

Guangyan Li, Yongqiang Tang, Wensheng Zhang

ICML 2024 • poster

MASTER: Market-Guided Stock Transformer for Stock Price Forecasting

Tong Li, Zhaoyang Liu, Yanyan Shen et al.

AAAI 2024 • paper • arXiv:2312.15235
57 citations

Merging Multi-Task Models via Weight-Ensembling Mixture of Experts

Anke Tang, Li Shen, Yong Luo et al.

ICML 2024 • poster

Meta Evidential Transformer for Few-Shot Open-Set Recognition

Hitesh Sapkota, Krishna Neupane, Qi Yu

ICML 2024 • poster

MFTN: A Multi-scale Feature Transfer Network Based on IMatchFormer for Hyperspectral Image Super-Resolution

Shuying Huang, Mingyang Ren, Yong Yang et al.

ICML 2024 • poster

Modeling Language Tokens as Functionals of Semantic Fields

Zhengqi Pei, Anran Zhang, Shuhui Wang et al.

ICML 2024 • poster

MS-TIP: Imputation Aware Pedestrian Trajectory Prediction

Pranav Singh Chib, Achintya Nath, Paritosh Kabra et al.

ICML 2024 • poster

Multi-Agent Reinforcement Learning with Hierarchical Coordination for Emergency Responder Stationing

Amutheezan Sivagnanam, Ava Pettet, Hunter Lee et al.

ICML 2024 • poster

Neural Reasoning about Agents’ Goals, Preferences, and Actions

Matteo Bortoletto, Lei Shi, Andreas Bulling

AAAI 2024 • paper • arXiv:2312.07122
8 citations

No More Shortcuts: Realizing the Potential of Temporal Self-Supervision

Ishan Rajendrakumar Dave, Simon Jenni, Mubarak Shah

AAAI 2024 • paper • arXiv:2312.13008

OAT: Object-Level Attention Transformer for Gaze Scanpath Prediction

Yini Fang, Jingling Yu, Haozheng Zhang et al.

ECCV 2024 • poster • arXiv:2407.13335
2 citations

Omni-Recon: Harnessing Image-based Rendering for General-Purpose Neural Radiance Fields

Yonggan Fu, Huaizhi Qu, Zhifan Ye et al.

ECCV 2024 • poster • arXiv:2403.11131

PIDformer: Transformer Meets Control Theory

Tam Nguyen, Cesar Uribe, Tan Nguyen et al.

ICML 2024 • poster

Polynomial-based Self-Attention for Table Representation Learning

Jayoung Kim, Yehjin Shin, Jeongwhan Choi et al.

ICML 2024 • poster