Hierarchical World Models as Visual Whole-Body Humanoid Controllers

20 citations · #432 of 3,827 papers in ICLR 2025 · 6 authors

Abstract

Whole-body control for humanoids is challenging due to the high-dimensional nature of the problem, coupled with the inherent instability of a bipedal morphology. Learning from visual observations further exacerbates this difficulty. In this work, we explore highly data-driven approaches to visual whole-body humanoid control based on reinforcement learning, without any simplifying assumptions, reward design, or skill primitives. Specifically, we propose a hierarchical world model in which a high-level agent generates commands based on visual observations for a low-level agent to execute, both of which are trained with rewards. Our approach produces highly performant control policies in 8 tasks with a simulated 56-DoF humanoid, while synthesizing motions that are broadly preferred by humans.
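The abstract describes a two-level architecture: a high-level agent maps visual observations to commands, which a low-level agent executes on a 56-DoF humanoid. The nested control loop can be sketched as follows; all names here (`high_level_policy`, `low_level_policy`, the command dimensionality, and the step ratio `k`) are illustrative assumptions, not the paper's actual implementation, which trains both agents with reinforcement learning inside learned world models.

```python
# Minimal sketch of a hierarchical control loop, assuming a high-level
# agent that issues an abstract command from vision and a low-level agent
# that turns proprioception plus that command into joint actions.
# All computations are stand-ins, not the paper's learned models.
import numpy as np

rng = np.random.default_rng(0)
NUM_DOF = 56  # degrees of freedom of the simulated humanoid


def high_level_policy(visual_obs):
    """Hypothetical high-level agent: visual observation -> command vector."""
    features = visual_obs.mean(axis=(0, 1))  # stand-in for a vision encoder
    return np.resize(np.tanh(features), NUM_DOF)


def low_level_policy(proprio_state, command):
    """Hypothetical low-level agent: (proprioception, command) -> joint actions."""
    return np.tanh(proprio_state + command)  # bounded actions in [-1, 1]


# The high-level agent acts at a coarser timescale: one command is reused
# for k consecutive low-level steps (a common hierarchical-control pattern).
visual_obs = rng.standard_normal((64, 64, 3))
proprio = rng.standard_normal(NUM_DOF)
command = high_level_policy(visual_obs)
for _ in range(5):  # k = 5 low-level steps per high-level command
    action = low_level_policy(proprio, command)
```

The key design point the abstract emphasizes is that neither level relies on hand-designed skill primitives: both agents are trained end-to-end from rewards.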

Citation History

Jan 25, 2026: 19 · Jan 27, 2026: 19 · Jan 31, 2026: 20