Agent World Model: Infinity Synthetic Environments for Agentic Reinforcement Learning
Abstract
Large language model agents trained in synthetic environments with code-driven simulations and database-backed state transitions demonstrate superior out-of-distribution generalization compared to traditional benchmark-specific approaches.
Recent advances in large language models (LLMs) have empowered autonomous agents to perform complex tasks that require multi-turn interactions with tools and environments. However, scaling such agent training is limited by the lack of diverse and reliable environments. In this paper, we propose Agent World Model (AWM), a fully synthetic environment generation pipeline. Using this pipeline, we scale to 1,000 environments covering everyday scenarios, in which agents can interact with rich toolsets (35 tools per environment on average) and obtain high-quality observations. Notably, these environments are code-driven and backed by databases, providing more reliable and consistent state transitions than environments simulated by LLMs. Moreover, they enable more efficient agent interaction than collecting trajectories from realistic environments. To demonstrate the effectiveness of this resource, we perform large-scale reinforcement learning for multi-turn tool-use agents. Thanks to the fully executable environments and accessible database states, we can also design reliable reward functions. Experiments on three benchmarks show that training exclusively in synthetic environments, rather than benchmark-specific ones, yields strong out-of-distribution generalization. The code is available at https://github.com/Snowflake-Labs/agent-world-model.
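To make the core idea concrete, here is a minimal, self-contained sketch (not the AWM codebase; the environment, tool names, and schema are invented for illustration) of a database-backed environment whose tools are ordinary Python functions over a SQLite database, so every state transition is executable code and task completion can be verified directly from the final database state:

```python
# Illustrative sketch only (not the AWM codebase): a synthetic environment
# whose tools are plain Python functions over a SQLite database, so every
# state transition is executable code and the task outcome can be verified
# against the final database state.
import sqlite3


class FlightBookingEnv:
    """Hypothetical everyday-scenario environment with code-driven tools."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.executescript("""
            CREATE TABLE flights  (id INTEGER PRIMARY KEY, dest TEXT, seats INTEGER);
            CREATE TABLE bookings (flight_id INTEGER, passenger TEXT);
            INSERT INTO flights VALUES (1, 'SEA', 2), (2, 'SFO', 0);
        """)

    # --- tools exposed to the agent ------------------------------------
    def search_flights(self, dest: str):
        return self.db.execute(
            "SELECT id, dest, seats FROM flights WHERE dest = ?", (dest,)
        ).fetchall()

    def book_flight(self, flight_id: int, passenger: str) -> str:
        row = self.db.execute(
            "SELECT seats FROM flights WHERE id = ?", (flight_id,)
        ).fetchone()
        if row is None or row[0] <= 0:
            return "error: no seats available"
        self.db.execute(
            "UPDATE flights SET seats = seats - 1 WHERE id = ?", (flight_id,))
        self.db.execute("INSERT INTO bookings VALUES (?, ?)", (flight_id, passenger))
        return "ok"

    # --- outcome verification against the final database state ---------
    def task_completed(self, passenger: str, dest: str) -> bool:
        return self.db.execute(
            "SELECT 1 FROM bookings b JOIN flights f ON b.flight_id = f.id "
            "WHERE b.passenger = ? AND f.dest = ?", (passenger, dest)
        ).fetchone() is not None


env = FlightBookingEnv()
env.book_flight(1, "alice")
assert env.task_completed("alice", "SEA")
```

Because the verifier only needs to read the final database state, the reward signal stays consistent across rollouts, unlike free-form judgments from an LLM-simulated environment.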
Community
Introducing Agent World Model (AWM): we synthesized 1,000 code-driven environments with 35K tools and 10K tasks for large-scale agentic reinforcement learning!
No real APIs. No human design. Just 100 seed names, turned into fully functional, database-backed agent environments exposed via an MCP interface.
Agents trained purely on synthetic envs generalize to out-of-distribution benchmarks. Code, environments, and models are all open-sourced.
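For illustration, a tool from one of these environments could be served over MCP roughly as follows, assuming the FastMCP helper from the official `mcp` Python SDK; the server name, tools, and schema are invented for this example and are not taken from the AWM repository:

```python
# Hypothetical MCP server for one synthetic, database-backed environment.
# Assumes `pip install mcp`; tool names and schema are illustrative only.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("awm-demo-env")
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, text TEXT)")


@mcp.tool()
def add_note(text: str) -> int:
    """Insert a note; the state transition is a plain SQL write."""
    cur = db.execute("INSERT INTO notes (text) VALUES (?)", (text,))
    db.commit()
    return cur.lastrowid


@mcp.tool()
def list_notes() -> list[str]:
    """Read the current database state back to the agent."""
    return [row[0] for row in db.execute("SELECT text FROM notes")]


if __name__ == "__main__":
    mcp.run()  # serve the tools over stdio to any MCP client
```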
We train Qwen3 (4B/8B/14B) with online RL using the GRPO algorithm at serious scale:
- 1,024 parallel environment instances per training step
- Hybrid reward: step-level format checks + task-level outcome verification (see the sketch after this list)
- History-aware training: sliding-window truncation of the interaction history is aligned between training and inference
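Here is a minimal sketch of those last two ingredients; the weights, regex, and helper names are assumptions for illustration, not the paper's implementation. It combines a step-level format check with a task-level outcome check, normalizes rewards GRPO-style within a group of rollouts for the same task, and truncates history with the same sliding window at training and inference time:

```python
# Illustrative sketch only; weights, patterns, and names are assumptions.
import re
import statistics

TOOL_CALL_PATTERN = re.compile(r"<tool_call>\s*\{.*?\}\s*</tool_call>", re.DOTALL)


def step_format_reward(assistant_turns: list[str]) -> float:
    """Step level: fraction of assistant turns with a well-formed tool call."""
    if not assistant_turns:
        return 0.0
    ok = sum(1 for turn in assistant_turns if TOOL_CALL_PATTERN.search(turn))
    return ok / len(assistant_turns)


def hybrid_reward(assistant_turns: list[str], task_passed: bool,
                  w_format: float = 0.2, w_outcome: float = 0.8) -> float:
    """Task level: outcome verified against the final database state."""
    return w_format * step_format_reward(assistant_turns) + w_outcome * float(task_passed)


def grpo_advantages(group_rewards: list[float]) -> list[float]:
    """Group-relative advantages over rollouts of the same task."""
    mu = statistics.mean(group_rewards)
    sigma = statistics.pstdev(group_rewards) or 1.0
    return [(r - mu) / sigma for r in group_rewards]


def truncate_history(messages: list[dict], max_turns: int = 8) -> list[dict]:
    """Keep the system prompt plus the most recent turns, applied identically
    during rollout collection and training so the contexts match."""
    return messages[:1] + messages[1:][-max_turns:]


# Four rollouts of one task, scored with the hybrid reward:
rewards = [1.0, 0.84, 0.12, 0.2]
print(grpo_advantages(rewards))
```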
Key insight: code-driven environments give more stable learning signals than LLM-simulated ones, and they're orders of magnitude faster.
Results on 3 out-of-distribution benchmarks (AWM does NOT train on any benchmark-specific environments):
- BFCLv3: 8B jumps 53.83 → 65.94 (+12.11)
- τ²-bench: competitive, 14B reaches 39.03 Pass@1
- MCP-Universe: best overall, 8B: 6.70 → 11.17
AWM is the ONLY method that improves over Base on ALL three benchmarks.
Paper: https://arxiv.org/abs/2602.10090
Code: https://github.com/Snowflake-Labs/agent-world-model
Hugging Face: https://huggingface.co/datasets/Snowflake/AgentWorldModel-1K