
Guard

class Guard(IGuard, Generic[OT])
The Guard class. This class is the main entry point for using Guardrails. It can be initialized by one of the following patterns:
  • Guard().use(...)
  • Guard.for_string(...)
  • Guard.for_pydantic(...)
  • Guard.for_rail(...)
  • Guard.for_rail_string(...)
The __call__ method functions as a wrapper around LLM APIs. It takes in an LLM API, and optional prompt parameters, and returns a ValidationOutcome class that contains the raw output from the LLM, the validated output, as well as other helpful information.

__init__

def __init__(*,
             id: Optional[str] = None,
             name: Optional[str] = None,
             description: Optional[str] = None,
             validators: Optional[List[ValidatorReference]] = None,
             output_schema: Optional[Dict[str, Any]] = None,
             base_url: Optional[str] = None,
             api_key: Optional[str] = None,
             history_max_length: Optional[int] = None,
             use_server: Optional[bool] = None)
Initialize the Guard with serialized validator references and an output schema. Output schema must be a valid JSON Schema.

configure

def configure(*,
              num_reasks: Optional[int] = None,
              allow_metrics_collection: Optional[bool] = None)
Configure the Guard. Arguments:
  • num_reasks int, optional - The max times to re-ask the LLM if validation fails. Defaults to None.
  • allow_metrics_collection bool, optional - Whether to allow Guardrails to collect anonymous metrics. Defaults to None, and falls back to what is set via the guardrails configure command.

for_rail

@classmethod
def for_rail(cls,
             rail_file: str,
             *,
             name: Optional[str] = None,
             description: Optional[str] = None)
Create a Guard using a .rail file to specify the output schema, prompt, etc. Arguments:
  • rail_file - The path to the .rail file.
  • name str, optional - A unique name for this Guard. Defaults to gr- + the object id.
  • description str, optional - A description for this Guard. Defaults to None.
Returns: An instance of the Guard class.

for_rail_string

@classmethod
def for_rail_string(cls,
                    rail_string: str,
                    *,
                    name: Optional[str] = None,
                    description: Optional[str] = None)
Create a Guard using a .rail string to specify the output schema, prompt, etc. Arguments:
  • rail_string - The .rail string.
  • name str, optional - A unique name for this Guard. Defaults to gr- + the object id.
  • description str, optional - A description for this Guard. Defaults to None.
Returns: An instance of the Guard class.

for_pydantic

@classmethod
def for_pydantic(cls,
                 output_class: ModelOrListOfModels,
                 *,
                 reask_messages: Optional[List[Dict]] = None,
                 messages: Optional[List[Dict]] = None,
                 name: Optional[str] = None,
                 description: Optional[str] = None,
                 output_formatter: Optional[Union[str, BaseFormatter]] = None)
Create a Guard instance using a Pydantic model to specify the output schema. Arguments:
  • output_class - (Union[Type[BaseModel], List[Type[BaseModel]]]): The pydantic model that describes the desired structure of the output.
  • messages List[Dict], optional - A list of messages to give to the llm. Defaults to None.
  • reask_messages List[Dict], optional - A list of messages to use during reasks. Defaults to None.
  • name str, optional - A unique name for this Guard. Defaults to gr- + the object id.
  • description str, optional - A description for this Guard. Defaults to None.
  • output_formatter str | BaseFormatter, optional - "none" (default), "jsonformer", or a Guardrails Formatter.

for_string

@classmethod
def for_string(cls,
               validators: Sequence[Validator],
               *,
               string_description: Optional[str] = None,
               reask_messages: Optional[List[Dict]] = None,
               messages: Optional[List[Dict]] = None,
               name: Optional[str] = None,
               description: Optional[str] = None)
Create a Guard instance for a string response. Arguments:
  • validators - (List[Validator]): The list of validators to apply to the string output.
  • string_description str, optional - A description for the string to be generated. Defaults to None.
  • messages List[Dict], optional - A list of messages to pass to llm. Defaults to None.
  • reask_messages List[Dict], optional - A list of messages to use during reasks. Defaults to None.
  • name str, optional - A unique name for this Guard. Defaults to gr- + the object id.
  • description str, optional - A description for this Guard. Defaults to None.

__call__

def __call__(
        llm_api: Optional[Callable] = None,
        *args,
        prompt_params: Optional[Dict] = None,
        num_reasks: Optional[int] = 1,
        messages: Optional[List[Dict]] = None,
        metadata: Optional[Dict] = None,
        full_schema_reask: Optional[bool] = None,
        **kwargs
) -> Union[ValidationOutcome[OT], Iterator[ValidationOutcome[OT]]]
Call the LLM and validate the output. Arguments:
  • llm_api - The LLM API to call (e.g. openai.completions.create or openai.chat.completions.create)
  • prompt_params - The parameters to pass to the prompt.format() method.
  • num_reasks - The max times to re-ask the LLM for invalid output.
  • messages - The message history to pass to the LLM.
  • metadata - Metadata to pass to the validators.
  • full_schema_reask - When reasking, whether to regenerate the full schema or just the incorrect values. Defaults to True if a base model is provided, False otherwise.
Returns: ValidationOutcome

parse

def parse(llm_output: str,
          *args,
          metadata: Optional[Dict] = None,
          llm_api: Optional[Callable] = None,
          num_reasks: Optional[int] = None,
          prompt_params: Optional[Dict] = None,
          full_schema_reask: Optional[bool] = None,
          **kwargs) -> ValidationOutcome[OT]
Alternate flow to using Guard where the llm_output is known. Arguments:
  • llm_output - The output being parsed and validated.
  • metadata - Metadata to pass to the validators.
  • llm_api - The LLM API to call (e.g. openai.completions.create or openai.chat.completions.create)
  • num_reasks - The max times to re-ask the LLM for invalid output.
  • prompt_params - The parameters to pass to the prompt.format() method.
  • full_schema_reask - When reasking, whether to regenerate the full schema or just the incorrect values.
Returns: ValidationOutcome

error_spans_in_output

def error_spans_in_output() -> List[ErrorSpan]
Get the error spans in the last output.

use

def use(*validator_spread: Validator,
        validators: List[Validator] = [],
        on: str = "output") -> "Guard"
Applies validators to the property specified in the on argument. Calling Guard.use with the same on value multiple times will overwrite previously configured validators on the specified property. Arguments:
  • *validator_spread - One or more validators passed as positional arguments to use.
  • validators - Keyword argument that allows explicitly setting a list of validators to use.
  • on - The property to validate. Valid options include "output", "messages", or a JSON path starting with "$.". Defaults to "output".

get_validators

def get_validators(on: str) -> List[Validator]
The read-only counterpart to Guard.use. Retrieves the validators applied to the specified property. Arguments:
  • on - The property for which to return configured validators. Valid options include "output", "messages", or a JSON path starting with "$.".

validate

def validate(llm_output: str, *args, **kwargs) -> ValidationOutcome[OT]

to_runnable

def to_runnable() -> Runnable
Convert a Guard to a LangChain Runnable.

to_dict

def to_dict() -> Dict[str, Any]

json_function_calling_tool

def json_function_calling_tool(
        tools: Optional[list] = None) -> List[Dict[str, Any]]
Appends an OpenAI tool that specifies the output structure using JSON Schema for chat models.

from_dict

@classmethod
def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional["Guard"]

save

def save()
Upserts a Guard to your guardrails-api server. Only valid for servers using a database to persist Guards. Not valid for servers using a config.py file.

load

@classmethod
def load(
  name: str,
  *,
  api_key: Optional[str] = None,
  base_url: Optional[str] = None,
  history_max_length: Optional[int] = None
) -> Optional["Guard"]
Fetches and loads a Guard from your guardrails-api server. Arguments:
  • name - The name of an existing Guard to fetch from the server.
  • api_key - The api key to use for Authorization with the server. Defaults to the GUARDRAILS_API_KEY environment variable value or None.
  • base_url - The base url of the server. Defaults to the GUARDRAILS_BASE_URL environment variable value or http://localhost:8000.
  • history_max_length - The maximum length of the history stack to retain.

delete

def delete()
Deletes a Guard from your guardrails-api server. Only valid for servers using a database to persist Guards. Not valid for servers using a config.py file.

AsyncGuard

class AsyncGuard(Guard, Generic[OT])
The AsyncGuard class. This class is one of the main entry points for using Guardrails. It is initialized from one of the following class methods:
  • for_rail
  • for_rail_string
  • for_pydantic
  • for_string
The __call__ method functions as a wrapper around LLM APIs. It takes in an Async LLM API, and optional prompt parameters, and returns the raw output stream from the LLM and the validated output stream.

for_pydantic

@classmethod
def for_pydantic(cls,
                 output_class: ModelOrListOfModels,
                 *,
                 messages: Optional[List[Dict]] = None,
                 reask_messages: Optional[List[Dict]] = None,
                 name: Optional[str] = None,
                 description: Optional[str] = None)

for_string

@classmethod
def for_string(cls,
               validators: Sequence[Validator],
               *,
               string_description: Optional[str] = None,
               messages: Optional[List[Dict]] = None,
               reask_messages: Optional[List[Dict]] = None,
               name: Optional[str] = None,
               description: Optional[str] = None)

from_dict

@classmethod
def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional["AsyncGuard"]

load

@classmethod
def load(
  name: str,
  *,
  api_key: Optional[str] = None,
  base_url: Optional[str] = None,
  history_max_length: Optional[int] = None
) -> Optional["AsyncGuard"]
Fetches and loads a Guard from your guardrails-api server.

use

def use(*validator_spread: Validator,
        validators: List[Validator] = [],
        on: str = "output") -> "AsyncGuard"

__call__

async def __call__(
    llm_api: Optional[Callable[..., Awaitable[Any]]] = None,
    *args,
    prompt_params: Optional[Dict] = None,
    num_reasks: Optional[int] = 1,
    messages: Optional[List[Dict]] = None,
    metadata: Optional[Dict] = None,
    full_schema_reask: Optional[bool] = None,
    **kwargs
) -> Union[
        ValidationOutcome[OT],
        Awaitable[ValidationOutcome[OT]],
        AsyncIterator[ValidationOutcome[OT]],
]
Call the LLM and validate the output. Pass an async LLM API to return a coroutine. Arguments:
  • llm_api - The LLM API to call (e.g. openai.completions.create or openai.chat.completions.create)
  • prompt_params - The parameters to pass to the prompt.format() method.
  • num_reasks - The max times to re-ask the LLM for invalid output.
  • messages - The message history to pass to the LLM.
  • metadata - Metadata to pass to the validators.
  • full_schema_reask - When reasking, whether to regenerate the full schema or just the incorrect values. Defaults to True if a base model is provided, False otherwise.
Returns: The raw text output from the LLM and the validated output.

parse

async def parse(llm_output: str,
                *args,
                metadata: Optional[Dict] = None,
                llm_api: Optional[Callable[..., Awaitable[Any]]] = None,
                num_reasks: Optional[int] = None,
                prompt_params: Optional[Dict] = None,
                full_schema_reask: Optional[bool] = None,
                **kwargs) -> Awaitable[ValidationOutcome[OT]]
Alternate flow to using AsyncGuard where the llm_output is known. Arguments:
  • llm_output - The output being parsed and validated.
  • metadata - Metadata to pass to the validators.
  • llm_api - The LLM API to call (e.g. openai.completions.create or openai.chat.completions.create)
  • num_reasks - The max times to re-ask the LLM for invalid output.
  • prompt_params - The parameters to pass to the prompt.format() method.
  • full_schema_reask - When reasking, whether to regenerate the full schema or just the incorrect values.
Returns: The validated response. This is either a string or a dictionary, determined by the object schema defined in the RAILspec.

validate

async def validate(llm_output: str, *args,
                   **kwargs) -> Awaitable[ValidationOutcome[OT]]

ValidationOutcome

class ValidationOutcome(IValidationOutcome, ArbitraryModel, Generic[OT])
The final output from a Guard execution. Attributes:
  • call_id - The id of the Call that produced this ValidationOutcome.
  • raw_llm_output - The raw, unchanged output from the LLM call.
  • validated_output - The validated, and potentially fixed, output from the LLM call after passing through validation.
  • reask - If validation continuously fails and all allocated reasks are used, this field will contain the final reask that would have been sent to the LLM if additional reasks were available.
  • validation_passed - A boolean to indicate whether or not the LLM output passed validation. If this is False, the validated_output may be invalid.
  • error - If validation failed, this field will contain the error message.
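A typical consumption pattern branches on validation_passed. The helper below is a sketch written against the attributes listed above; the SimpleNamespace objects are duck-typed stand-ins, not real outcomes:

```python
from types import SimpleNamespace

def unwrap(outcome):
    """Return the validated output, or raise with the reported error."""
    if outcome.validation_passed:
        return outcome.validated_output
    raise ValueError(f"Validation failed: {outcome.error}")

# Stand-ins mimicking ValidationOutcome's attributes.
ok = SimpleNamespace(validation_passed=True,
                     validated_output={"name": "Ada"}, error=None)
bad = SimpleNamespace(validation_passed=False,
                      validated_output=None, error="missing field: name")

print(unwrap(ok))  # {'name': 'Ada'}
```

Note that even when validation_passed is True, validated_output may differ from raw_llm_output if a validator applied a fix.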

from_guard_history

@classmethod
def from_guard_history(cls, call: Call)
Create a ValidationOutcome from a history Call object.