Snowglobe can simulate multi-turn conversations for agents that use tools. When you run an agent simulation, Snowglobe plans conversations with the goal of triggering realistic tool calls across your agent’s surface area, including edge cases and error scenarios. This is useful for:
- Exploring your agent’s surface area: see how it performs across normal cases and edge cases for every tool
- Regression testing: catch regressions in agent behavior and multi-turn tool use over time
- Improving your agent: generate fresh data for prompt optimization, fine-tuning, or regular development iteration
Agent simulation is currently in beta. To get access, email us at admin@guardrailsai.com and we’ll enable it for your account.
## How it works at a high level
The flow for an agent simulation:

1. Register your tools with Snowglobe and instrument your code
2. Start the `snowglobe-connect` process locally
3. Launch a simulation from the Snowglobe UI
4. Snowglobe orchestrates persona-driven conversations with your agent
5. When your agent calls a tool, Snowglobe can dynamically mock the response; no production data is touched
6. Conversations and results appear as the simulation runs
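To make the mocking step concrete, here is a minimal sketch of the general pattern: the agent’s tool calls are routed through an interceptor that returns simulated responses instead of touching production systems. All names here (`MockingToolRouter`, `ToolCall`, `register_mock`) are hypothetical illustrations of the pattern, not the Snowglobe Connect API.

```python
# Illustrative sketch only; the real Snowglobe Connect SDK API may differ.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ToolCall:
    """A tool invocation emitted by the agent."""
    name: str
    args: dict


class MockingToolRouter:
    """Routes tool calls to mock handlers during a simulation run."""

    def __init__(self) -> None:
        self._mocks: dict[str, Callable[[dict], Any]] = {}

    def register_mock(self, tool_name: str, handler: Callable[[dict], Any]) -> None:
        self._mocks[tool_name] = handler

    def dispatch(self, call: ToolCall) -> Any:
        # In a real integration, unmocked tools might fall through to the
        # live implementation; the sketch fails loudly instead to stay safe.
        if call.name not in self._mocks:
            raise KeyError(f"No mock registered for tool {call.name!r}")
        return self._mocks[call.name](call.args)


router = MockingToolRouter()
router.register_mock(
    "get_order_status",
    lambda args: {"order_id": args["order_id"], "status": "shipped"},
)

result = router.dispatch(ToolCall("get_order_status", {"order_id": "A123"}))
print(result)  # {'order_id': 'A123', 'status': 'shipped'}
```

Because every response comes from a registered handler, the agent exercises its full tool-calling path while production data stays untouched, which is the property the simulation relies on.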
## Prerequisites
Before setting up agent simulation, you need:

- A Snowglobe account: sign up or book a demo if you don’t have one
- Access to your agent’s source code: you’ll need to add a few lines of instrumentation
- A chatbot description: this tells Snowglobe what your agent does so it can generate relevant scenarios. See how to write a good chatbot description for guidance
- Snowglobe Connect SDK v0.6.0+: agent simulation requires the SDK. See the Snowglobe Connect overview if you haven’t set it up yet
## Roadmap

Agent simulation is in active development. What’s coming:

| Feature | Target | Details |
|---|---|---|
| Persona reuse | April 2026 | Snowglobe’s persona library already lets you save personas from previous simulations; agent simulation will get a dedicated flow for copying those personas into tool-based runs. |
| Semantic steering layer | April 2026 | Use a prompt to steer simulation objectives in both probe and distribution matching modes. Similar to simulation intent, but tailored for tool workflows. |
| Enhanced error messaging | April 2026 | More detailed error messages and debugging tools for common setup and runtime issues. |
| Automatic insights | May 2026 | Automated analysis of simulation results to surface insights about tool usage patterns, edge cases, and potential improvements. This already exists in Snowglobe for non-tool simulations. |
| New onboarding flow | June 2026 | A new tool management UI, Claude Code / Codex / Cursor end-to-end support for setup and instrumentation, less manual intake, and automated tuning for historical data in both modes. |
## Concepts
Understand simulation modes and how tool mocking works under the hood.
## Getting started
Set up your agent for agent simulation step by step.