Poster "positional encoding" Papers

12 papers found

LEDiT: Your Length-Extrapolatable Diffusion Transformer without Positional Encoding

Shen Zhang, Siyuan Liang, Yaning Tan et al.

NeurIPS 2025 · poster · arXiv:2503.04344
1 citation

Vocabulary In-Context Learning in Transformers: Benefits of Positional Encoding

Qian Ma, Ruoxiang Xu, Yongqiang Cai

NeurIPS 2025 · poster · arXiv:2511.06376

Why RoPE Struggles to Maintain Long-Term Decay in Long Sequences?

Wei Shen, Chao Yin, Yuliang Liu et al.

ICLR 2025 · poster

Few-shot NeRF by Adaptive Rendering Loss Regularization

Qingshan Xu, Xuanyu Yi, Jianyao Xu et al.

ECCV 2024 · poster · arXiv:2410.17839
10 citations

How do Transformers Perform In-Context Autoregressive Learning?

Michael Sander, Raja Giryes, Taiji Suzuki et al.

ICML 2024 · poster

Learning High-Frequency Functions Made Easy with Sinusoidal Positional Encoding

Chuanhao Sun, Zhihang Yuan, Kai Xu et al.

ICML 2024 · poster

Mol-AE: Auto-Encoder Based Molecular Representation Learning With 3D Cloze Test Objective

Junwei Yang, Kangjie Zheng, Siyu Long et al.

ICML 2024 · poster

OAT: Object-Level Attention Transformer for Gaze Scanpath Prediction

Yini Fang, Jingling Yu, Haozheng Zhang et al.

ECCV 2024 · poster · arXiv:2407.13335
2 citations

Recurrent Distance Filtering for Graph Representation Learning

Yuhui Ding, Antonio Orvieto, Bobby He et al.

ICML 2024 · poster

Subgraphormer: Unifying Subgraph GNNs and Graph Transformers via Graph Products

Guy Bar-Shalom, Beatrice Bevilacqua, Haggai Maron

ICML 2024 · poster

Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation

Zhenyu He, Guhao Feng, Shengjie Luo et al.

ICML 2024 · poster

What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding

Hongkang Li, Meng Wang, Tengfei Ma et al.

ICML 2024 · poster