Guard wrappers provide a simple way to add Guardrails to your LLM API calls. The wrappers are designed to be used with any LLM API.
There are three ways to use Guardrails with an LLM API:
- Natively-supported LLMs: Guardrails provides out-of-the-box wrappers for OpenAI, Cohere, Anthropic and HuggingFace. If you’re using any of these APIs, check out the documentation in the Using supported LLMs section.
- LLMs supported through LiteLLM: Guardrails provides an easy integration with LiteLLM, a lightweight abstraction over LLM APIs that supports 100+ LLMs. If you’re using an LLM that isn’t natively supported by Guardrails, you can use LiteLLM to integrate it with Guardrails.
- Build a custom LLM wrapper: If you’re using an LLM that isn’t natively supported by Guardrails and you don’t want to use LiteLLM, you can build a custom LLM API wrapper.
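A custom LLM wrapper is just a callable that Guardrails invokes with the prompt and any extra keyword arguments you passed to `Guard.__call__`, and that returns the raw completion as a string. A minimal sketch, assuming that signature (the `my_llm_api` name and the echo backend are illustrative stand-ins, not part of the Guardrails API):

```python
from typing import Optional

def my_llm_api(prompt: Optional[str] = None, **kwargs) -> str:
    """Custom LLM wrapper: receives the prompt plus any keyword
    arguments forwarded from Guard.__call__, and must return the
    completion text as a string."""
    # Stand-in for a real model call: a real wrapper would forward
    # `prompt` and `kwargs` to your provider's SDK and return its output.
    return f"echo: {prompt}"
```

You would then pass this callable as the first argument when invoking your guard, e.g. `guard(my_llm_api, prompt="...")`.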
Natively-supported LLMs
Guardrails provides native support for a select few LLMs. If you’re using any of these LLMs, you can use Guardrails’ out-of-the-box wrappers to add Guardrails to your LLM API calls.
- OpenAI
- Cohere
- Anthropic
- HuggingFace
LLMs supported via LiteLLM
LiteLLM is a lightweight wrapper that unifies the interface for 100+ LLMs. Guardrails only supports 4 LLMs natively, but you can use Guardrails with LiteLLM to support 100+ LLMs. You can read more about the LLMs supported by LiteLLM here. In order to use Guardrails with any of the LLMs supported through LiteLLM, you need to do the following:
- Call the Guard.__call__ method with litellm.completion as the first argument.
- Pass any additional LiteLLM arguments as keyword arguments to the Guard.__call__ method.