"state space models" Papers
14 papers found
2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification
Jingwei Zhang, Anh Tien Nguyen, Xi Han et al.
CVPR 2025 · poster · arXiv:2412.00678
20 citations
EventMG: Efficient Multilevel Mamba-Graph Learning for Spatiotemporal Event Representation
Sheng Wu, Lin Jin, Hui Feng et al.
NeurIPS 2025 · oral
Hymba: A Hybrid-head Architecture for Small Language Models
Xin Dong, Yonggan Fu, Shizhe Diao et al.
ICLR 2025 · poster · arXiv:2411.13676
55 citations
Revisiting Convolution Architecture in the Realm of DNA Foundation Models
Yu Bo, Weian Mao, Daniel Shao et al.
ICLR 2025 · poster · arXiv:2502.18538
4 citations
SCSegamba: Lightweight Structure-Aware Vision Mamba for Crack Segmentation in Structures
Hui Liu, Chen Jia, Fan Shi et al.
CVPR 2025 · poster · arXiv:2503.01113
24 citations
SegMAN: Omni-scale Context Modeling with State Space Models and Local Attention for Semantic Segmentation
Yunxiang Fu, Meng Lou, Yizhou Yu
CVPR 2025 · poster · arXiv:2412.11890
22 citations
VSSD: Vision Mamba with Non-Causal State Space Duality
Yuheng Shi, Mingjia Li, Minjing Dong et al.
ICCV 2025 · poster · arXiv:2407.18559
24 citations
From Generalization Analysis to Optimization Designs for State Space Models
Fusheng Liu, Qianxiao Li
ICML 2024 · oral
Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling
Raunaq Bhirangi, Chenyu Wang, Venkatesh Pattabiraman et al.
ICML 2024 · oral
Motion Mamba: Efficient and Long Sequence Motion Generation
Zeyu Zhang, Akide Liu, Ian Reid et al.
ECCV 2024 · poster · arXiv:2403.07487
108 citations
Probabilistic Time Series Modeling with Decomposable Denoising Diffusion Model
Tijin Yan, Hengheng Gong, Yongping He et al.
ICML 2024 · poster
Repeat After Me: Transformers are Better than State Space Models at Copying
Samy Jelassi, David Brandfonbrener, Sham Kakade et al.
ICML 2024 · poster
Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences
Zicheng Liu, Siyuan Li, Li Wang et al.
ICML 2024 · poster
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
Lianghui Zhu, Bencheng Liao, Qian Zhang et al.
ICML 2024 · poster