Call
Represents one invocation of the Guard.__call__, Guard.parse, or Guard.validate method.
Attributes:
iterations Stack[Iteration] - A stack of iterations: one for the initial validation round and one for each reask that occurs during a Call.
inputs CallInputs - The inputs as passed in to Guard.__call__, Guard.parse, or Guard.validate.
exception Optional[Exception] - The exception that interrupted the Guard execution.
prompt_params
messages
compiled_messages
reask_messages
logs
tokens_consumed
prompt_tokens_consumed
completion_tokens_consumed
raw_outputs
parsed_outputs
validation_response
fixed_output
guarded_output
reasks
validator_logs
error
failed_validations
status
tree
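The relationship between a Call and its Iterations can be sketched with plain dataclasses. This is a hypothetical simplification for illustration, not the actual guardrails implementation: it shows how Call-level properties such as tokens_consumed and raw_outputs are aggregates over the iteration stack.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Iteration:
    # Simplified stand-in for guardrails' Iteration class
    prompt_tokens_consumed: int = 0
    completion_tokens_consumed: int = 0
    raw_output: Optional[str] = None

    @property
    def tokens_consumed(self) -> int:
        return self.prompt_tokens_consumed + self.completion_tokens_consumed

@dataclass
class Call:
    # One Iteration for the initial validation round, plus one per reask
    iterations: List[Iteration] = field(default_factory=list)

    @property
    def tokens_consumed(self) -> int:
        # Call-level counts aggregate across every iteration in the stack
        return sum(it.tokens_consumed for it in self.iterations)

    @property
    def raw_outputs(self) -> List[str]:
        # The raw LLM output from each round, in order
        return [it.raw_output for it in self.iterations if it.raw_output is not None]

call = Call(iterations=[
    Iteration(10, 5, "first draft"),
    Iteration(12, 7, "reask output"),
])
print(call.tokens_consumed)  # 34
print(call.raw_outputs)      # ['first draft', 'reask output']
```

The same pattern explains the plural Call properties (raw_outputs, parsed_outputs) versus the singular Iteration properties (raw_output, parsed_output): the Call view fans out over every round.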
Iteration
id str - The unique identifier for the iteration.
call_id str - The unique identifier for the Call that this iteration is a part of.
index int - The index of this iteration within the Call.
inputs Inputs - The inputs for the validation loop.
outputs Outputs - The outputs from the validation loop.
logs
tokens_consumed
prompt_tokens_consumed
completion_tokens_consumed
raw_output
parsed_output
validation_response
guarded_output
reasks
validator_logs
error
exception
failed_validations
error_spans_in_output
status
Inputs
llm_api Optional[PromptCallableBase] - The constructed class for calling the LLM.
llm_output Optional[str] - The string output from an external LLM call provided by the user via Guard.parse.
messages Optional[List[Dict]] - The message history provided by the user for chat model calls.
prompt_params Optional[Dict] - The parameters provided by the user that will be formatted into the final LLM prompt.
num_reasks Optional[int] - The total number of reasks allowed; user provided or defaulted.
metadata Optional[Dict[str, Any]] - The metadata provided by the user to be used during validation.
full_schema_reask Optional[bool] - Whether reasks are performed across the entire schema or at the field level.
stream Optional[bool] - Whether or not streaming was used.
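To make prompt_params concrete: they behave like standard Python string formatting applied to a templated prompt. This is an illustrative example of the substitution, not the library's internal code; the template text here is made up.

```python
# A templated prompt with two placeholders (hypothetical example)
prompt_template = (
    "Summarize the following document in {num_sentences} sentences:\n{document}"
)

# The user-supplied prompt_params provide values for each placeholder
prompt_params = {
    "num_sentences": 2,
    "document": "Guardrails validates LLM outputs against a schema.",
}

# The parameters are formatted into the final LLM prompt
final_prompt = prompt_template.format(**prompt_params)
print(final_prompt)
```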
Outputs
llm_response_info Optional[LLMResponse] - Information from the LLM response.
raw_output Optional[str] - The exact output from the LLM.
parsed_output Optional[Union[str, List, Dict]] - The output parsed from the LLM response as it was passed into validation.
validation_response Optional[Union[str, ReAsk, List, Dict]] - The response from the validation process.
guarded_output Optional[Union[str, List, Dict]] - Any valid values after undergoing validation. Some values may be "fixed" values that were corrected during validation. This property may be a partial structure if field-level reasks occur.
reasks List[ReAsk] - Information from the validation process used to construct a ReAsk to the LLM on validation failure. Default [].
validator_logs List[ValidatorLogs] - The results of each individual validation. Default [].
error Optional[str] - The error message from any exception that was raised and interrupted the process.
exception Optional[Exception] - The exception that interrupted the process.
failed_validations
error_spans_in_output
status
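A plausible way to read the status property against the other Outputs fields is sketched below. This is hypothetical derivation logic written for illustration, not guardrails' exact implementation: an error short-circuits everything, outstanding reasks or failed validations mean failure, and anything else passed.

```python
from typing import List, Optional

def outputs_status(error: Optional[str],
                   reasks: List[object],
                   has_failed_validations: bool) -> str:
    # Hypothetical ordering: an interrupting error dominates,
    # then unresolved reasks / failed validations, then success.
    if error is not None:
        return "error"
    if reasks or has_failed_validations:
        return "fail"
    return "pass"

print(outputs_status(None, [], False))        # pass
print(outputs_status(None, ["reask"], False)) # fail
print(outputs_status("boom", [], False))      # error
```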
CallInputs
llm_api Optional[Callable[[Any], Awaitable[Any]]] - The LLM function provided by the user during Guard.__call__ or Guard.parse.
messages Optional[dict[str, str]] - The messages as provided by the user.
args List[Any] - Additional arguments for the LLM as provided by the user. Default [].
kwargs Dict[str, Any] - Additional keyword arguments for the LLM as provided by the user. Default {}.
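The args/kwargs capture can be sketched as follows. This is a simplified stand-in, not the real CallInputs class, and the capture helper is hypothetical: it only shows how extra positional and keyword arguments passed by the user end up recorded alongside the LLM callable and messages.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional

@dataclass
class CallInputs:
    # Simplified stand-in: records exactly what the user passed in
    llm_api: Optional[Callable[..., Any]] = None
    messages: Optional[Dict[str, str]] = None
    args: List[Any] = field(default_factory=list)     # Default []
    kwargs: Dict[str, Any] = field(default_factory=dict)  # Default {}

def capture(llm_api, messages=None, *args, **kwargs) -> CallInputs:
    # Hypothetical helper: everything beyond the known parameters is
    # stashed verbatim for later inspection in the call history.
    return CallInputs(llm_api=llm_api, messages=messages,
                      args=list(args), kwargs=dict(kwargs))

ci = capture(print, {"role": "user"}, 1, 2, temperature=0.0)
print(ci.args)    # [1, 2]
print(ci.kwargs)  # {'temperature': 0.0}
```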