"state space models" Papers

34 papers found

2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification

Jingwei Zhang, Anh Tien Nguyen, Xi Han et al.

CVPR 2025 · poster · arXiv:2412.00678 · 20 citations

A Structure-aware and Motion-adaptive Framework for 3D Human Pose Estimation with Mamba

Ye Lu, Jie Wang, Jianjun Gao et al.

ICCV 2025 · poster · arXiv:2507.19852 · 2 citations

Autocorrelation Matters: Understanding the Role of Initialization Schemes for State Space Models

Fusheng Liu, Qianxiao Li

ICLR 2025 · oral · arXiv:2411.19455 · 6 citations

Deep Space Weather Model: Long-Range Solar Flare Prediction from Multi-Wavelength Images

Shunya Nagashima, Komei Sugiura

ICCV 2025 · poster · arXiv:2508.07847 · 1 citation

Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient

Wenlong Wang, Ivana Dusparic, Yucheng Shi et al.

ICLR 2025 · poster · arXiv:2410.08893 · 3 citations

EDELINE: Enhancing Memory in Diffusion-based World Models via Linear-Time Sequence Modeling

Jia-Hua Lee, Bor-Jiun Lin, Wei-Fang Sun et al.

NeurIPS 2025 · spotlight · arXiv:2502.00466 · 2 citations

EventMG: Efficient Multilevel Mamba-Graph Learning for Spatiotemporal Event Representation

Sheng Wu, Lin Jin, Hui Feng et al.

NeurIPS 2025 · oral

FlowRAM: Grounding Flow Matching Policy with Region-Aware Mamba Framework for Robotic Manipulation

Sen Wang, Le Wang, Sanping Zhou et al.

CVPR 2025 · poster · arXiv:2506.16201 · 8 citations

High-Resolution Spatiotemporal Modeling with Global-Local State Space Models for Video-Based Human Pose Estimation

Runyang Feng, Hyung Jin Chang, Tze Ho Elden Tse et al.

ICCV 2025 · poster · arXiv:2510.11017

Hymba: A Hybrid-head Architecture for Small Language Models

Xin Dong, Yonggan Fu, Shizhe Diao et al.

ICLR 2025 · poster · arXiv:2411.13676 · 55 citations

Mamba-Adaptor: State Space Model Adaptor for Visual Recognition

Fei Xie, Jiahao Nie, Yujin Tang et al.

CVPR 2025 · poster · arXiv:2505.12685 · 1 citation

MambaIRv2: Attentive State Space Restoration

Hang Guo, Yong Guo, Yaohua Zha et al.

CVPR 2025 · poster · arXiv:2411.15269 · 82 citations

Mesh Mamba: A Unified State Space Model for Saliency Prediction in Non-Textured and Textured Meshes

Kaiwei Zhang, Dandan Zhu, Xiongkuo Min et al.

CVPR 2025 · poster · arXiv:2504.01466 · 2 citations

MeshMamba: State Space Models for Articulated 3D Mesh Generation and Reconstruction

Yusuke Yoshiyasu, Leyuan Sun, Ryusuke Sagawa

ICCV 2025 · poster · arXiv:2507.15212 · 1 citation

OuroMamba: A Data-Free Quantization Framework for Vision Mamba

Akshat Ramachandran, Mingyu Lee, Huan Xu et al.

ICCV 2025 · poster · arXiv:2503.10959 · 4 citations

PASS: Path-selective State Space Model for Event-based Recognition

Jiazhou Zhou, Kanghao Chen, Lei Zhang et al.

NeurIPS 2025 · oral · arXiv:2409.16953 · 1 citation

Revisiting Convolution Architecture in the Realm of DNA Foundation Models

Yu Bo, Weian Mao, Daniel Shao et al.

ICLR 2025 · poster · arXiv:2502.18538 · 4 citations

RFMamba: Frequency-Aware State Space Model for RF-Based Human-Centric Perception

Rui Zhang, Ruixu Geng, Yadong Li et al.

ICLR 2025 · poster · 2 citations

SCSegamba: Lightweight Structure-Aware Vision Mamba for Crack Segmentation in Structures

Hui Liu, Chen Jia, Fan Shi et al.

CVPR 2025 · poster · arXiv:2503.01113 · 24 citations

SegMAN: Omni-scale Context Modeling with State Space Models and Local Attention for Semantic Segmentation

Yunxiang Fu, Meng Lou, Yizhou Yu

CVPR 2025 · poster · arXiv:2412.11890 · 22 citations

Spectral State Space Model for Rotation-Invariant Visual Representation Learning

Sahar Dastani, Ali Bahri, Moslem Yazdanpanah et al.

CVPR 2025 · poster · arXiv:2503.06369 · 3 citations

Sports-Traj: A Unified Trajectory Generation Model for Multi-Agent Movement in Sports

Yi Xu, Yun Fu

ICLR 2025 · oral · arXiv:2405.17680 · 10 citations

State Space Models are Provably Comparable to Transformers in Dynamic Token Selection

Naoki Nishikawa, Taiji Suzuki

ICLR 2025 · poster · arXiv:2405.19036 · 6 citations

ThunderKittens: Simple, Fast, and Adorable Kernels

Benjamin Spector, Simran Arora, Aaryan Singhal et al.

ICLR 2025 · poster · 3 citations

VSSD: Vision Mamba with Non-Causal State Space Duality

Yuheng Shi, Mingjia Li, Minjing Dong et al.

ICCV 2025 · poster · arXiv:2407.18559 · 24 citations

Zebra-Llama: Towards Extremely Efficient Hybrid Models

Mingyu Yang, Mehdi Rezagholizadeh, Guihong Li et al.

NeurIPS 2025 · poster · arXiv:2505.17272 · 6 citations

From Generalization Analysis to Optimization Designs for State Space Models

Fusheng Liu, Qianxiao Li

ICML 2024 · oral

Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling

Raunaq Bhirangi, Chenyu Wang, Venkatesh Pattabiraman et al.

ICML 2024 · oral

Motion Mamba: Efficient and Long Sequence Motion Generation

Zeyu Zhang, Akide Liu, Ian Reid et al.

ECCV 2024 · poster · arXiv:2403.07487 · 108 citations

Probabilistic Time Series Modeling with Decomposable Denoising Diffusion Model

Tijin Yan, Hengheng Gong, Yongping He et al.

ICML 2024 · poster

Repeat After Me: Transformers are Better than State Space Models at Copying

Samy Jelassi, David Brandfonbrener, Sham Kakade et al.

ICML 2024 · poster

Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences

Zicheng Liu, Siyuan Li, Li Wang et al.

ICML 2024 · poster

VideoMamba: State Space Model for Efficient Video Understanding

Kunchang Li, Xinhao Li, Yi Wang et al.

ECCV 2024 · poster · arXiv:2403.06977 · 401 citations

Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model

Lianghui Zhu, Bencheng Liao, Qian Zhang et al.

ICML 2024 · poster