"positional encoding" Papers
19 papers found
Continuity and Isolation Lead to Doubts or Dilemmas in Large Language Models
Hector Pasten, Felipe Urrutia, Hector Orellana et al.
LEDiT: Your Length-Extrapolatable Diffusion Transformer without Positional Encoding
Shen Zhang, Siyuan Liang, Yaning Tan et al.
Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability
Yarden Bakish, Itamar Zimerman, Hila Chefer et al.
Spectral Graph Neural Networks are Incomplete on Graphs with a Simple Spectrum
Snir Hordan, Maya Bechler-Speicher, Gur Lifshitz et al.
ToF-IP: Time-of-Flight Enhanced Sparse Inertial Poser for Real-time Human Motion Capture
Yuan Yao, Shifan Jiang, Yangqing Hou et al.
Vocabulary In-Context Learning in Transformers: Benefits of Positional Encoding
Qian Ma, Ruoxiang Xu, Yongqiang Cai
Why RoPE Struggles to Maintain Long-Term Decay in Long Sequences?
Wei Shen, Chao Yin, Yuliang Liu et al.
Few-shot NeRF by Adaptive Rendering Loss Regularization
Qingshan Xu, Xuanyu Yi, Jianyao Xu et al.
How do Transformers Perform In-Context Autoregressive Learning?
Michael Sander, Raja Giryes, Taiji Suzuki et al.
Learning High-Frequency Functions Made Easy with Sinusoidal Positional Encoding
Chuanhao Sun, Zhihang Yuan, Kai Xu et al.
Mitigating Perspective Distortion-induced Shape Ambiguity in Image Crops
Aditya Prakash, Arjun Gupta, Saurabh Gupta
Mol-AE: Auto-Encoder Based Molecular Representation Learning With 3D Cloze Test Objective
Junwei Yang, Kangjie Zheng, Siyu Long et al.
OAT: Object-Level Attention Transformer for Gaze Scanpath Prediction
Yini Fang, Jingling Yu, Haozheng Zhang et al.
Occlusion Handling in 3D Human Pose Estimation with Perturbed Positional Encoding
Niloofar Azizi, Mohsen Fayyaz, Horst Bischof
Recurrent Distance Filtering for Graph Representation Learning
Yuhui Ding, Antonio Orvieto, Bobby He et al.
Spline-based Transformers
Prashanth Chandran, Agon Serifi, Markus Gross et al.
Subgraphormer: Unifying Subgraph GNNs and Graph Transformers via Graph Products
Guy Bar Shalom, Beatrice Bevilacqua, Haggai Maron
Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation
Zhenyu He, Guhao Feng, Shengjie Luo et al.
What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding
Hongkang Li, Meng Wang, Tengfei Ma et al.