Snowglobe SDK Integration Guide
These examples are for snowglobe<=0.4.x. See the documentation for the latest version.
Overview
Snowglobe is a simulation engine designed for testing and evaluating AI agents and chatbots through automated conversation generation and analysis. This guide demonstrates how to integrate Snowglobe into your continuous integration (CI) pipeline and programmatic workflows for comprehensive agent testing.
Prerequisites
Before integrating Snowglobe, ensure you have:
- Python 3.10+ installed
- Snowglobe SDK package (pip install snowglobe-sdk)
- Valid API credentials (API key and Organization ID)
- OpenAI API key (or other supported LLM provider credentials)
- Access to a Snowglobe control plane instance
Required Dependencies
import json  # used by the results-download helper below
import time  # used by the polling helper below

from snowglobe.sdk import Client
from snowglobe.sdk.api.default import (
    get_api_simulations_id,
    get_api_simulations_id_download_data,
    post_api_agents,
    post_api_simulations,
    put_api_simulations_id_settings
)
from snowglobe.sdk.models import (
    AgentCreateSchema,
    Agent,
    ValidationError,
    SimulationCreateSchema,
    SimulationSettingsUpdateSchema
)
Authentication Setup
Environment Variables
Set up your credentials as environment variables or configuration constants:
X_API_KEY = "eyJhbM..." # Your Snowglobe API key
X_SNOWGLOBE_ORG_ID = "orZu..." # Your organization ID
OPENAI_API_KEY = "sk-prnZS..." # Your LLM provider API key
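In CI, prefer reading these values from the environment instead of hardcoding them in the script. A minimal sketch, assuming the environment variable names match the constants above (adjust them to your CI secret names):

```python
import os

# Read Snowglobe and LLM credentials from the environment; the variable
# names here are illustrative and should match your CI secret names.
X_API_KEY = os.environ.get("X_API_KEY", "")
X_SNOWGLOBE_ORG_ID = os.environ.get("X_SNOWGLOBE_ORG_ID", "")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")

# Fail fast with a clear message if anything is missing.
missing = [name for name, value in [
    ("X_API_KEY", X_API_KEY),
    ("X_SNOWGLOBE_ORG_ID", X_SNOWGLOBE_ORG_ID),
    ("OPENAI_API_KEY", OPENAI_API_KEY),
] if not value]
if missing:
    print(f"Missing credentials: {', '.join(missing)}")
```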
Client Configuration
Initialize the Snowglobe client with proper authentication headers:
control_plane_url = "https://api.simlab.guardrailsai.com" # Or your hosted instance
auth_header = {
    "x-api-key": X_API_KEY,
    "x-snowglobe-org-id": X_SNOWGLOBE_ORG_ID
}

client = Client(
    base_url=control_plane_url,
    headers=auth_header,
    follow_redirects=True
)
Core Components
1. Agent Creation
Agents represent the AI systems you want to test. Each agent requires:
- Name and Description: Identifiers for your agent
- Icon: Visual representation (optional)
- Connection Info: LLM provider configuration
- System Prompt: Instructions defining the agent’s behavior
agent_schema = AgentCreateSchema(
    name="customer_support_agent",
    description="Customer support agent for Amatto's pizza restaurant",
    icon="pizza",
    connection_info={
        "endpoint": "",
        "provider": "OpenAI",
        "extra_body": [],
        "model_name": "openai/gpt-4o",
        "api_key_ref": OPENAI_API_KEY,
        "extra_headers": [],
        "system_prompt": "You are a helpful expert customer support agent for Amatto's pizza"
    }
)
Note:
Agents utilizing a code integration via snowglobe-connect require a two-step setup:
- Create the agent using the Snowglobe API as shown above.
- Configure the agent in your snowglobe-connect deployment by mapping the agent's ID and settings in the agents.json file.
- Run snowglobe-connect start.
This ensures Snowglobe can route simulation traffic to your custom integration correctly.
2. Simulation Configuration
Simulations define how conversations will be generated and evaluated:
simulation_config = {
    "name": "continuous integration simulation",
    "role": "Customer support agent for Amatto's pizza restaurant",
    "user_description": "",
    "use_cases": "",
    "generation_status": "pending",
    "evaluation_status": "pending",
    "validation_status": "pending",
    "source_data": {
        "docs": {
            "misc": [],
            "knowledge_base": [],
            "historical_data": []
        },
        "evaluation_configuration": {
            "No Financial Advice": {
                "id": "e5af8dee-6d8d-4144-b754-204d24879ec9",
                "name": "No Financial Advice",
                "version": 1,
                "metadata": {}
            }
        },
        "generation_configuration": {
            "max_topics": 1,
            "max_personas": 4,
            "branching_factor": 25,
            "max_conversations": 500,
            "max_conversation_length": 4,
            "continue_conversations_from_adapted_messages": False,
            "data_gen_mode": "coverage_focused_v3",
            "intent": "",
            "persona_topic_generators": [
                {
                    "name": "app_description_system_prompt",
                    "settings": {
                        "max_personas": 4
                    }
                }
            ],
            "min_conversation_length": 1
        }
    },
    "is_template": False
}
CI Integration Workflow
Step 1: Create Agent
def create_agent(client, agent_schema):
    """Create a new agent for testing."""
    response = post_api_agents.sync_detailed(
        body=agent_schema,
        client=client
    )
    if response.parsed and isinstance(response.parsed, Agent):
        print(f"Agent created successfully: {response.parsed.id}")
        return str(response.parsed.id)
    else:
        raise Exception("Failed to create agent")
Step 2: Launch Simulation
def create_simulation(client, agent_id, simulation_config):
    """Create and launch a new simulation."""
    simulation_config.update({
        "application_id": agent_id,
        "app_id": agent_id
    })
    response = post_api_simulations.sync_detailed(
        body=SimulationCreateSchema(**simulation_config),
        client=client
    )
    if response.parsed:
        simulation_id = str(response.parsed.id)
        # Auto-approve personas for CI automation
        update_settings(client, simulation_id)
        return simulation_id
    else:
        raise Exception("Failed to create simulation")

def update_settings(client, simulation_id):
    """Enable auto-approval of personas for automated testing."""
    update_body = SimulationSettingsUpdateSchema(auto_approve_personas=True)
    put_api_simulations_id_settings.sync_detailed(
        id=simulation_id,
        body=update_body,
        client=client
    )
Step 3: Monitor Simulation Progress
def wait_for_completion(client, simulation_id, timeout_minutes=20):
    """Poll simulation until completion or timeout."""
    max_attempts = timeout_minutes * 6  # Poll every 10 seconds
    poll_interval = 10
    for attempt in range(max_attempts):
        response = get_api_simulations_id.sync_detailed(
            id=simulation_id,
            client=client
        )
        if response.parsed and hasattr(response.parsed, "state_num"):
            current_state = response.parsed.state_num
            print(f"Simulation state: {current_state}")
            # State 17+ indicates completion
            if current_state >= 17:
                print("Simulation completed successfully")
                return True
        time.sleep(poll_interval)
    raise TimeoutError("Simulation timed out before completion")
Step 4: Retrieve Results
def download_results(client, simulation_id):
    """Download simulation results for analysis."""
    response = get_api_simulations_id_download_data.sync_detailed(
        id=simulation_id,
        client=client
    )
    if response.status_code == 200 and response.parsed:
        # Convert each result row to a dict and write them all out as JSON
        filename = f"{simulation_id}_results.json"
        rows = []
        for row in response.parsed:
            rows.append(row.to_dict())
        with open(filename, "w") as f:
            f.write(json.dumps(rows, indent=2))
        print(f"Results saved to {filename}")
        return filename
    else:
        raise Exception(f"Failed to download results: {response.status_code}")
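The downloaded file is a JSON array of conversation rows. The exact per-row fields depend on your evaluation configuration, so this sketch (a hypothetical summarize_results helper, not part of the SDK) only counts rows; adapt the per-row inspection to the fields you actually see in your file:

```python
import json

def summarize_results(path):
    """Load a downloaded results file and return a simple summary.

    Each row's fields depend on the simulation's evaluation
    configuration, so only the row count is reported here.
    """
    with open(path) as f:
        rows = json.load(f)
    return {"rows": len(rows)}
```

In a CI job you might fail the build when the summary shows fewer conversations than expected.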
Complete CI Integration Example
def run_agent_simulation(agent_config, simulation_config):
    """Complete workflow for running agent simulations in CI."""
    try:
        # Initialize client
        client = Client(
            base_url=control_plane_url,
            headers=auth_header,
            follow_redirects=True
        )
        # Create agent
        agent_id = create_agent(client, agent_config)
        # Launch simulation
        simulation_id = create_simulation(client, agent_id, simulation_config)
        # Wait for completion
        wait_for_completion(client, simulation_id)
        # Download and return results
        results_file = download_results(client, simulation_id)
        return {
            "success": True,
            "agent_id": agent_id,
            "simulation_id": simulation_id,
            "results_file": results_file
        }
    except Exception as e:
        print(f"Simulation failed: {str(e)}")
        return {"success": False, "error": str(e)}
Error Handling
def robust_simulation_run(config):
    """Simulation run with comprehensive error handling."""
    try:
        return run_agent_simulation(config["agent"], config["simulation"])
    except ValidationError as e:
        print(f"Configuration validation failed: {e}")
        return {"success": False, "error": "validation", "details": str(e)}
    except TimeoutError as e:
        print(f"Simulation timed out: {e}")
        return {"success": False, "error": "timeout", "details": str(e)}
    except Exception as e:
        print(f"Unexpected error: {e}")
        return {"success": False, "error": "unexpected", "details": str(e)}
CI Pipeline Integration
# Example GitHub Actions workflow
name: Agent Testing
on: [push, pull_request]
jobs:
  test-agent:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: pip install snowglobe-sdk
      - name: Run agent simulation
        env:
          X_API_KEY: ${{ secrets.SNOWGLOBE_API_KEY }}
          X_SNOWGLOBE_ORG_ID: ${{ secrets.SNOWGLOBE_ORG_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: python test_agent_simulation.py
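For the workflow step above to fail the pipeline when a simulation fails, test_agent_simulation.py should translate the result dict from run_agent_simulation() into a process exit code. A minimal sketch of that glue (result_to_exit_code is a hypothetical helper; the result shape follows the complete example above):

```python
import sys

def result_to_exit_code(result):
    """Map a run_agent_simulation() result dict to a CI exit code."""
    if result.get("success"):
        print(f"Simulation {result.get('simulation_id')} passed")
        return 0
    print(f"Simulation failed: {result.get('error')}", file=sys.stderr)
    return 1

# In test_agent_simulation.py, end with:
# sys.exit(result_to_exit_code(run_agent_simulation(agent_schema, simulation_config)))
```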
Simulation States
Understanding simulation states helps with monitoring:
- States 0-5: Initialization and setup
- States 6-10: Persona and topic generation
- States 11-16: Conversation generation and agent testing
- State 17+: Evaluation complete, results available
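When logging progress, a small helper can translate state_num values into the phases above. The range boundaries follow the table (treat them as a convention that may shift between Snowglobe versions):

```python
def simulation_phase(state_num):
    """Map a simulation state_num to a human-readable phase name."""
    if state_num <= 5:
        return "initialization and setup"
    if state_num <= 10:
        return "persona and topic generation"
    if state_num <= 16:
        return "conversation generation and agent testing"
    return "complete"
```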
Troubleshooting
Common Issues
Authentication Errors
- Verify API key and organization ID are correct
- Ensure headers are properly formatted
- Check network connectivity to control plane
Simulation Failures
- Review agent configuration for missing required fields
- Verify LLM provider API key is valid and has sufficient quota
- Check simulation parameters are within acceptable ranges
Timeout Issues
- Increase timeout duration for complex simulations
- Reduce persona count or length for faster completion
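The pattern behind wait_for_completion() can also be factored into a generic poll loop so the timeout and interval are easy to tune per simulation. A self-contained sketch of that pattern (poll_until is illustrative, not an SDK function):

```python
import time

def poll_until(check, timeout_s=1200, interval_s=10):
    """Call check() repeatedly until it returns True or the deadline passes.

    For complex simulations, raise timeout_s rather than shrinking
    interval_s, which only adds load on the control plane.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    raise TimeoutError("condition not met before timeout")
```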