Boosting Virtual Agent Learning and Reasoning: A Step-Wise, Multi-Dimensional, and Generalist Reward Model with Benchmark

ICML 2025

Abstract

The development of Generalist Virtual Agents (GVAs) has shown significant promise in autonomous task execution. However, current training paradigms face critical limitations, including reliance on outcome supervision and labor-intensive human annotations. To address these challenges, we propose Similar, a step-wise, multi-dimensional, generalist reward model, which offers fine-grained signals for agent training and can select better actions for inference-time scaling. Specifically, we begin by systematically defining five dimensions for evaluating agent actions. Building on this framework, we design an MCTS-P algorithm to automatically collect and annotate step-wise, five-dimensional agent execution data. Using this data, we train Similar with our crafted Triple-M strategy. Furthermore, we introduce the first benchmark in the virtual agent domain for step-wise, multi-dimensional reward model training and evaluation, named SRM. This benchmark consists of two components: SRMTrain, which serves as the training set for Similar, and SRMEval, a manually selected test set for evaluating the reward model. Experimental results demonstrate that Similar, through its step-wise, multi-dimensional assessment and synergistic gain, provides GVAs with effective intermediate signals during both training and inference-time scaling. The code is available at https://github.com/antgroup/Similar.
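To make the inference-time-scaling use concrete, the sketch below shows how a step-wise, multi-dimensional reward model can rank candidate actions at each step and pick the best one. The dimension names, the `score_fn` interface, and the toy scorer are illustrative assumptions, not the paper's actual Similar model or its five defined dimensions.

```python
# Minimal sketch: reward-guided action selection for inference-time scaling.
# ASSUMPTIONS: the dimension names below and the scoring interface are
# hypothetical stand-ins for the trained Similar reward model.
from typing import Callable, Dict, List, Optional

# Hypothetical evaluation dimensions (placeholders, not the paper's exact five).
DIMENSIONS = ["helpfulness", "odds_of_success", "efficiency",
              "task_relevance", "coherence"]


def select_action(candidates: List[str],
                  score_fn: Callable[[str], Dict[str, float]],
                  weights: Optional[Dict[str, float]] = None) -> str:
    """Return the candidate action with the highest weighted aggregate score
    across all reward dimensions."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}

    def aggregate(action: str) -> float:
        scores = score_fn(action)
        return sum(weights[d] * scores.get(d, 0.0) for d in DIMENSIONS)

    return max(candidates, key=aggregate)


# Toy scorer standing in for the trained reward model: gives every dimension
# a deterministic pseudo-score, with a bonus for actions containing "click".
def toy_scorer(action: str) -> Dict[str, float]:
    base = (len(action) % 7) / 7.0
    bonus = 0.3 if "click" in action else 0.0
    return {d: base + bonus for d in DIMENSIONS}


best = select_action(["scroll down", "click submit", "type text"], toy_scorer)
print(best)  # "click submit" scores highest under the toy scorer
```

In a real agent loop, `score_fn` would query the trained reward model on the current state-action pair at every step, so weaker intermediate actions are pruned before the trajectory commits to them.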
