Guardrails supports 100+ LLMs through its integration with LiteLLM. Because LiteLLM mirrors the OpenAI interface, the Guardrails call API exposes the same clean interface, so you can make LLM requests through Guardrails with nearly the same code you would write for OpenAI. To interact with a model, set that provider's API key in its standard environment variable (such as OPENAI_API_KEY) and select the model with the model parameter.
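For example, with an OpenAI model you could set the key in code (a minimal sketch; the key value is a placeholder, and in practice you would usually export the variable in your shell instead):

import os

# Guardrails/LiteLLM reads the provider's standard environment variable.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder, not a real key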

Supported LLM providers

Guardrails supports the following LLM providers:
  • OpenAI - GPT-4, GPT-3.5-turbo, and other OpenAI models
  • Anthropic - Claude models
  • Azure OpenAI - Azure-hosted OpenAI models
  • Google Gemini - Gemini Pro and other Google models
  • 100+ more - Through the LiteLLM integration (model strings for each provider are sketched below)
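The same guard(...) call works across all of these providers; only the model string changes. The identifiers below are illustrative (model names change over time, and the Azure deployment name is hypothetical), so check each provider's documentation for current values:

model = "gpt-4o"                    # OpenAI
model = "claude-3-haiku-20240307"   # Anthropic
model = "azure/my-gpt4-deployment"  # Azure OpenAI: "azure/" + your deployment name
model = "gemini/gemini-pro"         # Google Gemini, using LiteLLM's provider prefix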

Basic usage pattern

All LLM integrations follow a similar pattern:
from guardrails import Guard

# Create a Guard
guard = Guard()

# Call the LLM with validation
result = guard(
    messages=[{"role": "user", "content": "Your prompt here"}],
    model="model-name",  # any LiteLLM-supported model string, e.g. "gpt-4o"
)

print(result.validated_output)
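To have the output actually validated, attach one or more validators to the Guard. Here is a minimal sketch, assuming the RegexMatch validator has been installed from the Guardrails Hub (guardrails hub install hub://guardrails/regex_match) and that an OpenAI key is set:

from guardrails import Guard
from guardrails.hub import RegexMatch

# Raise an exception if the response does not match the expected pattern
guard = Guard().use(
    RegexMatch(regex=r"\d{3}-\d{3}-\d{4}", on_fail="exception")
)

result = guard(
    messages=[{"role": "user", "content": "Generate a fictional US phone number."}],
    model="gpt-4o",  # illustrative model string
)

print(result.validated_output)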

Next steps

Explore the specific tutorials for each LLM provider: