2025 "motion generation" Papers

14 papers found

Autoregressive Motion Generation with Gaussian Mixture-Guided Latent Sampling

Linnan Tu, Lingwei Meng, Zongyi Li et al.

NeurIPS 2025 (poster)

Deep Compositional Phase Diffusion for Long Motion Sequence Generation

Ho Yin Au, Jie Chen, Junkun Jiang et al.

NeurIPS 2025 (oral) · arXiv:2510.14427 · 1 citation

DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models

Ziyi Wu, Anil Kag, Ivan Skorokhodov et al.

NeurIPS 2025 (oral) · arXiv:2506.03517 · 11 citations

Direct Post-Training Preference Alignment for Multi-Agent Motion Generation Model Using Implicit Feedback from Pre-training Demonstrations

Thomas Tian, Kratarth Goel

ICLR 2025 (poster) · arXiv:2503.20105 · 4 citations

EgoLM: Multi-Modal Language Model of Egocentric Motions

Fangzhou Hong, Vladimir Guzov, Hyo Jin Kim et al.

CVPR 2025 (poster) · arXiv:2409.18127 · 12 citations

Guiding Human-Object Interactions with Rich Geometry and Relations

Mengqing Xue, Yifei Liu, Ling Guo et al.

CVPR 2025 (poster) · arXiv:2503.20172 · 6 citations

HUMOTO: A 4D Dataset of Mocap Human Object Interactions

Jiaxin Lu, Chun-Hao Huang, Uttaran Bhattacharya et al.

ICCV 2025 (poster) · arXiv:2504.10414 · 6 citations

MEgoHand: Multimodal Egocentric Hand-Object Interaction Motion Generation

Bohan Zhou, Yi Zhan, Zhongbin Zhang et al.

NeurIPS 2025 (oral) · arXiv:2505.16602 · 3 citations

MoMaps: Semantics-Aware Scene Motion Generation with Motion Maps

Jiahui Lei, Kyle Genova, George Kopanas et al.

ICCV 2025 (poster) · arXiv:2510.11107 · 1 citation

PINO: Person-Interaction Noise Optimization for Long-Duration and Customizable Motion Generation of Arbitrary-Sized Groups

Sakuya Ota, Qing Yu, Kent Fujiwara et al.

ICCV 2025 (poster) · arXiv:2507.19292 · 1 citation

SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters

Jianping Jiang, Weiye Xiao, Zhengyu Lin et al.

CVPR 2025 (poster) · arXiv:2412.00174 · 11 citations

SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios

Lingwei Dang, Ruizhi Shao, Hongwen Zhang et al.

NeurIPS 2025 (spotlight) · arXiv:2506.02444 · 3 citations

Think Then React: Towards Unconstrained Action-to-Reaction Motion Generation

Wenhui Tan, Boyuan Li, Chuhao Jin et al.

ICLR 2025 (poster) · 9 citations

UniEgoMotion: A Unified Model for Egocentric Motion Reconstruction, Forecasting, and Generation

Chaitanya Patel, Hiroki Nakamura, Yuta Kyuragi et al.

ICCV 2025 (poster) · arXiv:2508.01126 · 4 citations