Learn how to use Guardrails with Azure OpenAI Service.
Prerequisites
Set your Azure OpenAI credentials:
export AZURE_API_KEY="your-azure-api-key"
export AZURE_API_BASE="https://example-endpoint.openai.azure.com"
export AZURE_API_VERSION="2023-05-15"
Or in Python:
import os
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"
Basic usage
from guardrails import Guard
import os
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"
guard = Guard()
result = guard(
    model="azure/<<your_deployment_name>>",
    messages=[{"role": "user", "content": "How many moons does Jupiter have?"}],
)
print(result.validated_output)
Replace <<your_deployment_name>> with your actual Azure OpenAI deployment name.
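For illustration, the model string can be built from the deployment name with a small helper (hypothetical, not part of Guardrails; the deployment name below is invented):

```python
def azure_model(deployment_name: str) -> str:
    # Guardrails routes requests to Azure via the "azure/<deployment>" prefix.
    return f"azure/{deployment_name}"

# Hypothetical deployment name for illustration:
print(azure_model("my-gpt-4o-deployment"))  # azure/my-gpt-4o-deployment
```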
Streaming
Stream responses from Azure OpenAI with real-time validation:
from guardrails import Guard
import os
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"
guard = Guard()
stream_chunk_generator = guard(
    messages=[{"role": "user", "content": "How many moons does Jupiter have?"}],
    model="azure/<<your_deployment_name>>",
    stream=True,
)
for chunk in stream_chunk_generator:
    print(chunk.validated_output)
Function calling
Use Azure OpenAI’s function calling with Guardrails to generate structured data:
from pydantic import BaseModel
from typing import List
from guardrails import Guard
import os
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"
class Fruit(BaseModel):
    name: str
    color: str

class Basket(BaseModel):
    fruits: List[Fruit]
guard = Guard.for_pydantic(Basket)
result = guard(
    messages=[{"role": "user", "content": "Generate a basket of 5 fruits"}],
    model="azure/<<your_deployment_name>>",
    tools=guard.json_function_calling_tool([]),
    tool_choice="required",
)
print(result.validated_output)
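With Guard.for_pydantic(Basket), validated_output is a dict matching the Basket schema. A minimal offline sketch of consuming that shape (the sample data is invented for illustration; no LLM is called):

```python
# Invented sample matching the Basket schema above:
validated = {
    "fruits": [
        {"name": "apple", "color": "red"},
        {"name": "banana", "color": "yellow"},
    ]
}

# Each entry carries the fields declared on Fruit.
names = [fruit["name"] for fruit in validated["fruits"]]
print(names)  # ['apple', 'banana']
```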
Using validators
Add validators to ensure output quality:
from guardrails import Guard
from guardrails.hub import ProfanityFree
import os
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"
guard = Guard().use(ProfanityFree())
result = guard(
    messages=[{"role": "user", "content": "Tell me a story"}],
    model="azure/<<your_deployment_name>>",
)
print(result.validated_output)
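Besides validated_output, the returned outcome also exposes a validation_passed flag; a defensive pattern might look like the sketch below (the SimpleNamespace merely stands in for a real outcome object, since no LLM is called here):

```python
from types import SimpleNamespace

def safe_output(result):
    # Return the validated output only when every validator passed.
    return result.validated_output if result.validation_passed else None

# Stand-in object mimicking the outcome's fields (illustration only):
fake = SimpleNamespace(validation_passed=True, validated_output="Once upon a time...")
print(safe_output(fake))  # Once upon a time...
```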
Configuration
Azure OpenAI requires three environment variables:
AZURE_API_KEY - Your Azure OpenAI API key
AZURE_API_BASE - Your Azure OpenAI endpoint URL
AZURE_API_VERSION - The API version (e.g., "2023-05-15")
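A quick way to fail fast when one of these is missing is a small pre-flight check (a hypothetical helper, not part of Guardrails):

```python
import os

REQUIRED = ("AZURE_API_KEY", "AZURE_API_BASE", "AZURE_API_VERSION")

def missing_azure_vars(env=os.environ):
    # Return the names of required Azure OpenAI settings that are unset or empty.
    return [name for name in REQUIRED if not env.get(name)]

# With an empty environment, all three are reported:
print(missing_azure_vars(env={}))  # ['AZURE_API_KEY', 'AZURE_API_BASE', 'AZURE_API_VERSION']
```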
Supported models
Guardrails supports all Azure OpenAI deployments including:
- GPT-4
- GPT-3.5-turbo
- Any custom deployments you’ve created
Error handling
Guardrails automatically handles common Azure OpenAI errors with retries and exponential backoff:
from guardrails import Guard
import os
os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"
guard = Guard()
try:
    result = guard(
        messages=[{"role": "user", "content": "Your prompt"}],
        model="azure/<<your_deployment_name>>",
    )
    print(result.validated_output)
except Exception as e:
    print(f"Error: {e}")
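The retry-with-exponential-backoff behavior can be sketched generically as follows (a hypothetical helper illustrating the pattern, not Guardrails internals):

```python
import time

def with_backoff(fn, retries=3, base_delay=1.0, sleep=time.sleep):
    # Retry fn up to `retries` times, doubling the delay after each failure.
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Example: a function that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

delays = []  # capture the sleeps instead of actually waiting
print(with_backoff(flaky, sleep=delays.append))  # ok
print(delays)  # [1.0, 2.0]
```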