2025 Poster "sequence modeling" Papers
16 papers found
BlockScan: Detecting Anomalies in Blockchain Transactions
Jiahao Yu, Xian Wu, Hao Liu et al.
NeurIPS 2025 poster · arXiv:2410.04039 · 3 citations
Competition Dynamics Shape Algorithmic Phases of In-Context Learning
Core Francisco Park, Ekdeep Singh Lubana, Hidenori Tanaka
ICLR 2025 poster · arXiv:2412.01003 · 34 citations
Controllable Generation via Locally Constrained Resampling
Kareem Ahmed, Kai-Wei Chang, Guy Van den Broeck
ICLR 2025 poster · arXiv:2410.13111 · 9 citations
Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient
Wenlong Wang, Ivana Dusparic, Yucheng Shi et al.
ICLR 2025 poster · arXiv:2410.08893 · 3 citations
Enhancing the Maximum Effective Window for Long-Term Time Series Forecasting
Jiahui Zhang, Zhengyang Zhou, Wenjie Du et al.
NeurIPS 2025 poster
Evolutionary Reasoning Does Not Arise in Standard Usage of Protein Language Models
Yasha Ektefaie, Andrew Shen, Lavik Jain et al.
NeurIPS 2025 poster
Learning Video-Conditioned Policy on Unlabelled Data with Joint Embedding Predictive Transformer
Hao Luo, Zongqing Lu
ICLR 2025 poster
Limits of Deep Learning: Sequence Modeling through the Lens of Complexity Theory
Nikola Zubic, Federico Soldà, Aurelio Sulser et al.
ICLR 2025 poster · arXiv:2405.16674 · 17 citations
Neural Attention Search
Difan Deng, Marius Lindauer
NeurIPS 2025 poster · arXiv:2502.13251
Parallel Sequence Modeling via Generalized Spatial Propagation Network
Hongjun Wang, Wonmin Byeon, Jiarui Xu et al.
CVPR 2025 poster · arXiv:2501.12381 · 3 citations
Plug, Play, and Generalize: Length Extrapolation with Pointer-Augmented Neural Memory
Svetha Venkatesh, Kien Do, Hung Le et al.
ICLR 2025 poster
Scaling Up Liquid-Resistance Liquid-Capacitance Networks for Efficient Sequence Modeling
Mónika Farsang, Radu Grosu
NeurIPS 2025 poster · arXiv:2505.21717 · 4 citations
SeerAttention: Self-distilled Attention Gating for Efficient Long-context Prefilling
Yizhao Gao, Zhichen Zeng, DaYou Du et al.
NeurIPS 2025 poster
Selective Induction Heads: How Transformers Select Causal Structures in Context
Francesco D'Angelo, Francesco Croce, Nicolas Flammarion
ICLR 2025 poster · arXiv:2509.08184 · 4 citations
State Space Models are Provably Comparable to Transformers in Dynamic Token Selection
Naoki Nishikawa, Taiji Suzuki
ICLR 2025 poster · arXiv:2405.19036 · 6 citations
Unsupervised Meta-Learning via In-Context Learning
Anna Vettoruzzo, Lorenzo Braccaioli, Joaquin Vanschoren et al.
ICLR 2025 poster · arXiv:2405.16124 · 3 citations