Synchronous Connection Template
The synchronous connection template provides a straightforward way to connect your agent to Snowglobe for testing. This template is ideal when your application uses standard synchronous API calls and doesn’t require complex async handling.
When to Use
Use the synchronous template when:
- Your agent uses synchronous API calls (like standard OpenAI client)
- You don’t need complex async operations
- You want a simple, straightforward implementation
- Your application has moderate performance requirements
Template Code
When you run snowglobe-connect init and select the synchronous template, Snowglobe generates this code:
```python
from snowglobe.client import CompletionRequest, CompletionFunctionOutputs
from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def completion(request: CompletionRequest) -> CompletionFunctionOutputs:
    """
    Process a scenario request from Snowglobe.

    This function is called by the Snowglobe client to process test requests.
    It should return a CompletionFunctionOutputs object with the response content.

    Args:
        request (CompletionRequest): The request object containing messages for the test.

    Returns:
        CompletionFunctionOutputs: The response object with the generated content.
    """
    # Process the request using the messages. Example using OpenAI:
    messages = request.to_openai_messages(system_prompt="You are a helpful assistant.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages
    )
    return CompletionFunctionOutputs(response=response.choices[0].message.content)
```
Code Walkthrough
1. Imports and Setup
```python
from snowglobe.client import CompletionRequest, CompletionFunctionOutputs
from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
```
- CompletionRequest: Contains the messages and context from Snowglobe test scenarios
- CompletionFunctionOutputs: The response format expected by Snowglobe
- OpenAI client: Standard synchronous OpenAI client for making API calls
- Environment variable: Safely loads your OpenAI API key from environment
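Because a missing key otherwise surfaces later as an opaque authentication error, it can help to check for it up front. This guard is an optional illustration, not part of the generated template:

```python
import os

def load_api_key(var="OPENAI_API_KEY"):
    """Fail fast with a clear message instead of a confusing auth error later."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before starting the client.")
    return key
```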
2. Main Function
```python
def completion(request: CompletionRequest) -> CompletionFunctionOutputs:
```
The completion function is the entry point that Snowglobe calls for each test scenario. It must:
- Accept a `CompletionRequest` parameter
- Return a `CompletionFunctionOutputs` object
- Be named exactly `completion` (synchronous) or `acompletion` (asynchronous)
3. Message Processing
```python
messages = request.to_openai_messages(system_prompt="You are a helpful assistant.")
```
The to_openai_messages() method converts Snowglobe’s message format to OpenAI’s expected format. You can:
- Add a system prompt to guide your agent’s behavior
- Access individual messages with `request.messages`
- Extract conversation metadata with `request.get_conversation_id()`
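To make the shapes concrete, here is a minimal stand-in for the request object (the stub class and its field names are illustrative assumptions; the real `CompletionRequest` comes from `snowglobe.client`):

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str

@dataclass
class StubRequest:
    """Minimal stand-in for CompletionRequest, just to show the data shapes."""
    messages: list
    conversation_id: str = "conv-123"

    def to_openai_messages(self, system_prompt=None):
        # The system prompt, if given, is prepended ahead of the conversation.
        out = []
        if system_prompt:
            out.append({"role": "system", "content": system_prompt})
        out.extend({"role": m.role, "content": m.content} for m in self.messages)
        return out

    def get_conversation_id(self):
        return self.conversation_id

req = StubRequest(messages=[Message("user", "Hi there")])
msgs = req.to_openai_messages(system_prompt="You are a helpful assistant.")
```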
4. API Call
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages
)
```
This makes a standard synchronous call to OpenAI’s API. You can:
- Choose any OpenAI model that fits your needs
- Add additional parameters like temperature, max_tokens, etc.
- Replace with your preferred LLM provider
```python
return CompletionFunctionOutputs(response=response.choices[0].message.content)
```
Snowglobe expects responses in a specific format. The CompletionFunctionOutputs object wraps your agent’s response text.
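The extra parameters mentioned above can be collected before the call; this sketch only builds the keyword arguments (the `build_completion_kwargs` helper is illustrative, and the defaults shown are arbitrary choices):

```python
def build_completion_kwargs(messages, model="gpt-4o-mini",
                            temperature=0.2, max_tokens=512):
    """Collect keyword arguments for client.chat.completions.create()."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,  # lower values give more deterministic replies
        "max_tokens": max_tokens,    # caps the length of the generated response
    }

kwargs = build_completion_kwargs([{"role": "user", "content": "Hello"}])
# response = client.chat.completions.create(**kwargs)
```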
Customization Examples
Adding Custom System Prompts
```python
def completion(request: CompletionRequest) -> CompletionFunctionOutputs:
    # Extract conversation context
    conversation_id = request.get_conversation_id()

    # Custom system prompt based on scenario
    system_prompt = "You are a customer service representative. Be helpful and professional."
    messages = request.to_openai_messages(system_prompt=system_prompt)

    # ... rest of implementation
```
Using Different LLM Providers
```python
from anthropic import Anthropic
import os

client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))

def completion(request: CompletionRequest) -> CompletionFunctionOutputs:
    # Convert to Anthropic format. Note that Anthropic's Messages API accepts
    # only "user" and "assistant" roles here; a system prompt goes in the
    # separate `system` parameter of client.messages.create().
    messages = []
    for msg in request.messages:
        messages.append({
            "role": msg.role,
            "content": msg.content
        })
    response = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1000,
        messages=messages
    )
    return CompletionFunctionOutputs(response=response.content[0].text)
```
Error Handling
```python
def completion(request: CompletionRequest) -> CompletionFunctionOutputs:
    try:
        messages = request.to_openai_messages(system_prompt="You are a helpful assistant.")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages
        )
        return CompletionFunctionOutputs(response=response.choices[0].message.content)
    except Exception as e:
        # Return error message to Snowglobe for analysis
        return CompletionFunctionOutputs(
            response=f"Error processing request: {str(e)}"
        )
```
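A common extension of this pattern (not part of the generated template) is to retry transient failures before falling back to the error response. A minimal sketch, where `with_retries` and its backoff schedule are illustrative assumptions:

```python
import time

def with_retries(call, attempts=3, base_delay=0.0):
    """Retry a zero-argument callable on exception, re-raising after the last try."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # exponential backoff between tries

# Demonstration with a call that fails twice, then succeeds.
state = {"calls": 0}

def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky)
```

In the error-handling example above, the OpenAI call would be wrapped as `with_retries(lambda: client.chat.completions.create(...))` inside the `try` block.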
Testing Your Implementation
1. Test the connection
2. Start the client
3. Run scenarios: Visit your Snowglobe dashboard to execute test scenarios
The synchronous template:
- ✅ Simple and straightforward to implement
- ✅ Good for moderate traffic scenarios
- ✅ Easy to debug and troubleshoot
- ⚠️ May have higher latency under heavy load
- ⚠️ Limited concurrent request handling
For high-performance applications, consider the asynchronous connection template instead.
Next Steps