NeurIPS 2025 "vision transformers" Papers

21 papers found

A Circular Argument: Does RoPE need to be Equivariant for Vision?

Chase van de Geijn, Timo Lüddecke, Polina Turishcheva et al.

NeurIPS 2025 · poster · arXiv:2511.08368 · 2 citations

Alias-Free ViT: Fractional Shift Invariance via Linear Attention

Hagay Michaeli, Daniel Soudry

NeurIPS 2025 · poster · arXiv:2510.22673

BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models

Dingqiang Ye, Chao Fan, Zhanbo Huang et al.

NeurIPS 2025 · poster · arXiv:2505.18132 · 5 citations

ChA-MAEViT: Unifying Channel-Aware Masked Autoencoders and Multi-Channel Vision Transformers for Improved Cross-Channel Learning

Chau Pham, Juan C. Caicedo, Bryan Plummer

NeurIPS 2025 · poster · arXiv:2503.19331 · 4 citations

Elastic ViTs from Pretrained Models without Retraining

Walter Simoncini, Michael Dorkenwald, Tijmen Blankevoort et al.

NeurIPS 2025 · poster · arXiv:2510.17700

Energy Landscape-Aware Vision Transformers: Layerwise Dynamics and Adaptive Task-Specific Training via Hopfield States

Runze Xia, Richard Jiang

NeurIPS 2025 · poster

GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers

Guang Liang, Xinyao Liu, Jianxin Wu

NeurIPS 2025 · poster · arXiv:2506.11784 · 4 citations

Linear Differential Vision Transformer: Learning Visual Contrasts via Pairwise Differentials

Yifan Pu, Jixuan Ying, Qixiu Li et al.

NeurIPS 2025 · poster · arXiv:2511.00833

LookWhere? Efficient Visual Recognition by Learning Where to Look and What to See from Self-Supervision

Anthony Fuller, Yousef Yassin, Junfeng Wen et al.

NeurIPS 2025 · poster · arXiv:2505.18051 · 1 citation

Multi-Kernel Correlation-Attention Vision Transformer for Enhanced Contextual Understanding and Multi-Scale Integration

Hongkang Zhang, Shao-Lun Huang, Ercan Kuruoglu et al.

NeurIPS 2025 · poster

Normalize Filters! Classical Wisdom for Deep Vision

Gustavo Perez, Stella X. Yu

NeurIPS 2025 · poster · arXiv:2506.04401

Polyline Path Masked Attention for Vision Transformer

Zhongchen Zhao, Chaodong Xiao, Hui Lin et al.

NeurIPS 2025 · spotlight · arXiv:2506.15940

Randomized-MLP Regularization Improves Domain Adaptation and Interpretability in DINOv2

Joel Valdivia Ortega, Lorenz Lamm, Franziska Eckardt et al.

NeurIPS 2025 · poster · arXiv:2511.05509

Register and [CLS] tokens induce a decoupling of local and global features in large ViTs

Alexander Lappe, Martin Giese

NeurIPS 2025 · poster

Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks

Giyeong Oh, Woohyun Cho, Siyeol Kim et al.

NeurIPS 2025 · poster · arXiv:2505.11881

Scalable Neural Network Geometric Robustness Validation via Hölder Optimisation

Yanghao Zhang, Panagiotis Kouvaros, Alessio Lomuscio

NeurIPS 2025 · poster

Sinusoidal Initialization, Time for a New Start

Alberto Fernandez-Hernandez, Jose Mestre, Manuel F. Dolz et al.

NeurIPS 2025 · poster · arXiv:2505.12909 · 1 citation

SonoGym: High Performance Simulation for Challenging Surgical Tasks with Robotic Ultrasound

Yunke Ao, Masoud Moghani, Mayank Mittal et al.

NeurIPS 2025 · poster · arXiv:2507.01152 · 1 citation

TRUST: Test-Time Refinement using Uncertainty-Guided SSM Traverses

Sahar Dastani, Ali Bahri, Gustavo Vargas Hakim et al.

NeurIPS 2025 · poster · arXiv:2509.22813

Vision Transformers Don't Need Trained Registers

Nicholas Jiang, Amil Dravid, Alexei Efros et al.

NeurIPS 2025 · spotlight · arXiv:2506.08010 · 12 citations

Vision Transformers with Self-Distilled Registers

Zipeng Yan, Yinjie Chen, Chong Zhou et al.

NeurIPS 2025 · spotlight · arXiv:2505.21501 · 4 citations