"self-attention mechanisms" Papers
5 papers found
Context-Aware Regularization with Markovian Integration for Attention-Based Nucleotide Analysis
Mohammad Saleh Refahi, Mahdi Abavisani, Bahrad Sokhansanj et al.
NeurIPS 2025 (poster) · arXiv:2507.09378
Selective Induction Heads: How Transformers Select Causal Structures in Context
Francesco D'Angelo, Francesco Croce, Nicolas Flammarion
ICLR 2025 (poster) · arXiv:2509.08184 · 4 citations
SimpleTM: A Simple Baseline for Multivariate Time Series Forecasting
Hui Chen, Viet Luong, Lopamudra Mukherjee et al.
ICLR 2025 (oral)
Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion
Ishaan Rawal, Alexander Matyasko, Shantanu Jaiswal et al.
ICML 2024 (poster)
PolySketchFormer: Fast Transformers via Sketching Polynomial Kernels
Praneeth Kacham, Vahab Mirrokni, Peilin Zhong
ICML 2024 (poster)