Guard
The `Guard` class can be constructed and configured through the following entry points:

- `Guard().use(...)`
- `Guard().use_many(...)`
- `Guard.for_string(...)`
- `Guard.for_pydantic(...)`
- `Guard.for_rail(...)`
- `Guard.for_rail_string(...)`
The `__call__` method functions as a wrapper around LLM APIs. It takes in an LLM API and optional prompt parameters, and returns a `ValidationOutcome` class that contains the raw output from the LLM, the validated output, as well as other helpful information.
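A minimal sketch of wrapping an LLM call, assuming the `openai` package (>= 1.0) is installed and `OPENAI_API_KEY` is set; the model name and prompt are illustrative:

```python
import openai
from guardrails import Guard

# A bare Guard with no validators still wraps the call and
# returns a ValidationOutcome.
guard = Guard()

outcome = guard(
    openai.chat.completions.create,
    messages=[{"role": "user", "content": "Name a famous landmark."}],
    model="gpt-4o-mini",  # illustrative; extra kwargs are forwarded to the LLM API
)
print(outcome.raw_llm_output)
print(outcome.validation_passed)
```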
__init__
configure
Arguments:

- `num_reasks` (int, optional) - The max times to re-ask the LLM if validation fails. Defaults to None.
- `tracer` (Tracer, optional) - An OpenTelemetry tracer to use for sending traces to your OpenTelemetry sink. Defaults to None.
- `allow_metrics_collection` (bool, optional) - Whether to allow Guardrails to collect anonymous metrics. Defaults to None, and falls back to what is set via the `guardrails configure` command.
for_rail
Creates a `Guard` instance from a `.rail` file that specifies the output schema, prompt, etc.
Arguments:
- `rail_file` - The path to the `.rail` file.
- `num_reasks` (int, optional) - The max times to re-ask the LLM if validation fails. Deprecated.
- `tracer` (Tracer, optional) - An OpenTelemetry tracer to use for metrics and traces. Defaults to None.
- `name` (str, optional) - A unique name for this Guard. Defaults to `gr-` + the object id.
- `description` (str, optional) - A description for this Guard. Defaults to None.
Returns: An instance of the `Guard` class.
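A sketch, assuming a RAIL spec saved at a hypothetical path `my_spec.rail`:

```python
from guardrails import Guard

# "my_spec.rail" is a hypothetical path to a RAIL spec on disk.
guard = Guard.for_rail("my_spec.rail", name="landmark-guard")
```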
for_rail_string
Creates a `Guard` instance from a `.rail` string that specifies the output schema, prompt, etc.
Arguments:
- `rail_string` - The `.rail` string.
- `num_reasks` (int, optional) - The max times to re-ask the LLM if validation fails. Deprecated.
- `tracer` (Tracer, optional) - An OpenTelemetry tracer to use for metrics and traces. Defaults to None.
- `name` (str, optional) - A unique name for this Guard. Defaults to `gr-` + the object id.
- `description` (str, optional) - A description for this Guard. Defaults to None.
Returns: An instance of the `Guard` class.
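A sketch with an inline spec; the field name and description are illustrative:

```python
from guardrails import Guard

rail_string = """
<rail version="0.1">
<output>
    <string name="landmark" description="The name of a famous landmark" />
</output>
</rail>
"""

guard = Guard.for_rail_string(rail_string)
```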
for_pydantic
Arguments:

- `output_class` (Union[Type[BaseModel], List[Type[BaseModel]]]) - The Pydantic model that describes the desired structure of the output.
- `messages` (List[Dict], optional) - A list of messages to give to the LLM. Defaults to None.
- `reask_messages` (List[Dict], optional) - A list of messages to use during reasks. Defaults to None.
- `num_reasks` (int, optional) - The max times to re-ask the LLM if validation fails. Deprecated.
- `tracer` (Tracer, optional) - An OpenTelemetry tracer to use for metrics and traces. Defaults to None.
- `name` (str, optional) - A unique name for this Guard. Defaults to `gr-` + the object id.
- `description` (str, optional) - A description for this Guard. Defaults to None.
- `output_formatter` (str | Formatter, optional) - 'none' (default), 'jsonformer', or a Guardrails Formatter.
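A sketch of structured generation against an illustrative Pydantic schema:

```python
from pydantic import BaseModel, Field
from guardrails import Guard

# An illustrative schema for the structure we want back from the LLM.
class Pet(BaseModel):
    name: str = Field(description="The pet's name")
    species: str = Field(description="The pet's species")

guard = Guard.for_pydantic(
    output_class=Pet,
    messages=[{"role": "user", "content": "Describe a pet."}],
)
```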
for_string
Arguments:

- `validators` (List[Validator]) - The list of validators to apply to the string output.
- `string_description` (str, optional) - A description for the string to be generated. Defaults to None.
- `messages` (List[Dict], optional) - A list of messages to pass to the LLM. Defaults to None.
- `reask_messages` (List[Dict], optional) - A list of messages to use during reasks. Defaults to None.
- `num_reasks` (int, optional) - The max times to re-ask the LLM if validation fails. Deprecated.
- `tracer` (Tracer, optional) - An OpenTelemetry tracer to use for metrics and traces. Defaults to None.
- `name` (str, optional) - A unique name for this Guard. Defaults to `gr-` + the object id.
- `description` (str, optional) - A description for this Guard. Defaults to None.
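A sketch, assuming the `ValidLength` validator has been installed from the Guardrails Hub:

```python
from guardrails import Guard
from guardrails.hub import ValidLength  # assumed installed via the Guardrails Hub

guard = Guard.for_string(
    validators=[ValidLength(min=1, max=120, on_fail="reask")],
    string_description="A one-sentence description of a landmark.",
    messages=[{"role": "user", "content": "Describe a famous landmark in one sentence."}],
)
```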
__call__
Arguments:

- `llm_api` - The LLM API to call (e.g. `openai.completions.create` or `openai.Completion.acreate`)
- `prompt_params` - The parameters to pass to the prompt.format() method.
- `num_reasks` - The max times to re-ask the LLM for invalid output.
- `messages` - The message history to pass to the LLM.
- `metadata` - Metadata to pass to the validators.
- `full_schema_reask` - When reasking, whether to regenerate the full schema or just the incorrect values. Defaults to `True` if a base model is provided, `False` otherwise.
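A sketch that threads reask settings and prompt parameters through a call, continuing with a `guard` configured as above (the `${topic}` placeholder assumes Guardrails' `${...}`-style prompt variables):

```python
import openai

outcome = guard(
    openai.chat.completions.create,
    messages=[{"role": "user", "content": "Write one sentence about ${topic}."}],
    prompt_params={"topic": "lighthouses"},  # fills ${topic} above
    num_reasks=2,  # re-ask up to twice if validation fails
    model="gpt-4o-mini",  # illustrative model name
)
```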
parse
Arguments:

- `llm_output` - The output being parsed and validated.
- `metadata` - Metadata to pass to the validators.
- `llm_api` - The LLM API to call (e.g. `openai.completions.create` or `openai.Completion.acreate`)
- `num_reasks` - The max times to re-ask the LLM for invalid output.
- `prompt_params` - The parameters to pass to the prompt.format() method.
- `full_schema_reask` - When reasking, whether to regenerate the full schema or just the incorrect values.
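A sketch of validating output you already have in hand, continuing with a `guard` configured as above; the JSON string is illustrative:

```python
# Validate pre-existing output without calling an LLM.
outcome = guard.parse(llm_output='{"name": "Rex", "species": "dog"}')
print(outcome.validation_passed)

# Passing llm_api as well lets failed validations trigger reasks:
# outcome = guard.parse(llm_output=raw, llm_api=openai.chat.completions.create, num_reasks=1)
```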
error_spans_in_output
use

Adds a validator to the Guard. The validator can apply to either of the following:

- The output of an LLM request
- The message history

Arguments:

- `validator` - The validator to use. Either the class or an instance.
- `on` - The part of the LLM request to validate. Defaults to "output".
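A sketch of the fluent style, assuming the `RegexMatch` validator is installed from the Guardrails Hub; pass the class plus its arguments, or a ready-made instance:

```python
from guardrails import Guard
from guardrails.hub import RegexMatch  # assumed installed via the Guardrails Hub

guard = Guard().use(RegexMatch, regex=r"^[A-Z].*", on="output")
```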
use_many

Adds multiple validators to the Guard in a single call.
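A sketch, again assuming both validators are installed from the Guardrails Hub:

```python
from guardrails import Guard
from guardrails.hub import RegexMatch, ValidLength  # assumed installed via the Guardrails Hub

guard = Guard().use_many(
    RegexMatch(regex=r"^[A-Z].*"),
    ValidLength(min=1, max=120),
)
```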
validate
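A sketch, assuming `validate` runs the configured validators over a value you supply directly:

```python
# Continuing with the guard built above.
outcome = guard.validate("Guardrails is a Python framework.")
print(outcome.validation_passed)
```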
to_runnable
to_dict
json_function_calling_tool
from_dict
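A sketch of a serialization round trip using `to_dict` and `from_dict`, assuming the two are symmetric:

```python
from guardrails import Guard

# Continuing with the guard built above: serialize, then rebuild.
serialized = guard.to_dict()
restored = Guard.from_dict(serialized)
```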
AsyncGuard
- `for_rail`
- `for_rail_string`
- `for_pydantic`
- `for_string`
The `__call__` method functions as a wrapper around LLM APIs. It takes in an async LLM API and optional prompt parameters, and returns the raw output stream from the LLM and the validated output stream.
for_pydantic
for_string
from_dict
use
use_many
__call__
Arguments:

- `llm_api` - The LLM API to call (e.g. `openai.completions.create` or `openai.chat.completions.create`)
- `prompt_params` - The parameters to pass to the prompt.format() method.
- `num_reasks` - The max times to re-ask the LLM for invalid output.
- `messages` - The message history to pass to the LLM.
- `metadata` - Metadata to pass to the validators.
- `full_schema_reask` - When reasking, whether to regenerate the full schema or just the incorrect values. Defaults to `True` if a base model is provided, `False` otherwise.
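A sketch of async usage, assuming the `openai` package (>= 1.0) and `OPENAI_API_KEY`; names are illustrative:

```python
import asyncio
from openai import AsyncOpenAI
from guardrails import AsyncGuard

async def main():
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    guard = AsyncGuard()
    outcome = await guard(
        client.chat.completions.create,  # an async LLM API
        messages=[{"role": "user", "content": "Name a famous landmark."}],
        model="gpt-4o-mini",  # illustrative model name
    )
    print(outcome.validated_output)

asyncio.run(main())
```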
parse
Arguments:

- `llm_output` - The output being parsed and validated.
- `metadata` - Metadata to pass to the validators.
- `llm_api` - The LLM API to call (e.g. `openai.completions.create` or `openai.Completion.acreate`)
- `num_reasks` - The max times to re-ask the LLM for invalid output.
- `prompt_params` - The parameters to pass to the prompt.format() method.
- `full_schema_reask` - When reasking, whether to regenerate the full schema or just the incorrect values.
validate
ValidationOutcome
- `call_id` - The id of the Call that produced this ValidationOutcome.
- `raw_llm_output` - The raw, unchanged output from the LLM call.
- `validated_output` - The validated, and potentially fixed, output from the LLM call after passing through validation.
- `reask` - If validation continuously fails and all allocated reasks are used, this field will contain the final reask that would have been sent to the LLM if additional reasks were available.
- `validation_passed` - A boolean to indicate whether or not the LLM output passed validation. If this is False, the validated_output may be invalid.
- `error` - If the validation failed, this field will contain the error message.
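A sketch of branching on an outcome, continuing with a guard built earlier:

```python
outcome = guard.validate("Some LLM output to check.")
if outcome.validation_passed:
    print(outcome.validated_output)
else:
    print(f"Validation failed: {outcome.error}")
    print(f"Raw output was: {outcome.raw_llm_output}")
```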