
Use on_fail Actions

on_fail actions tell Guardrails what to do when a validator fails. They are set at the validator level, not the guard level.

The full set of on_fail actions is available in the Error remediation concepts doc, and not all of them are covered here.

Instead, this interactive doc guides you through how and when to use the different on_fail actions.

# setup, run imports
from guardrails import Guard, install

try:
    from guardrails.hub import DetectPII
except ImportError:
    install("hub://guardrails/detect_pii")
    from guardrails.hub import DetectPII
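
Because on_fail is attached to each validator, two validators in the same guard can react to failures differently. Below is a minimal sketch, assuming the ToxicLanguage validator is also installed from the hub (hub://guardrails/toxic_language); swap in whichever second validator you actually use.

# Hypothetical combination: each validator carries its own on_fail action
# ToxicLanguage is assumed installed via install("hub://guardrails/toxic_language")
from guardrails.hub import ToxicLanguage

guard = Guard().use_many(
    DetectPII(pii_entities="pii", on_fail="fix"),       # scrub PII automatically
    ToxicLanguage(threshold=0.5, on_fail="exception"),  # but hard-fail on toxicity
)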

Raise Exceptions

If you do not want to change the output of the LLM when validation fails, set on_fail="exception" and wrap your guard call in a try/except block.

This works particularly well for input validation.

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail="exception")
)

try:
    guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")
except Exception as e:
    print("output validation failed")
    print(e)


# on input validation

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail="exception"),
    on="msg_history"
)


try:
    guard(
        model='gpt-4o-mini',
        messages=[{
            "role": "user",
            "content": "Hello, my name is John Doe and my email is john.doe@example.com"
        }]
    )
except Exception as e:
    print("input validation failed")
    print(e)


output validation failed
Validation failed for field with errors: The following text in your response contains PII:
Hello, my name is John Doe and my email is john.doe@example.com
input validation failed
Validation failed for field with errors: The following text in your response contains PII:
Hello, my name is John Doe and my email is john.doe@example.com
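
Catching a bare Exception works, but you can scope the except clause more tightly. Here is a sketch that catches Guardrails' validation error specifically; it assumes your installed version exposes ValidationError under guardrails.errors (adjust the import if yours differs).

# Hedged sketch: the ValidationError import path can vary between Guardrails versions
from guardrails.errors import ValidationError

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail="exception")
)

try:
    guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")
except ValidationError as e:
    # Only validation failures land here; other errors still propagate
    print("validation failed:", e)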

noop to log and continue

If you want to log the error and continue, you can use the noop action. This is useful when you want to record the failure without changing the output or interrupting the LLM call.

# the on_fail parameter does not have to be set, as the default is "noop"

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail="noop")
)

res = guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")
print("guarded just fine")
print("Check if validation passed: ", res.validation_passed)
print("Show that the validated text and raw text remain the same: ",
      res.validated_output == res.raw_llm_output)
    guarded just fine
Check if validation passed: False
Show that the validated text and raw text remain the same: True
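
Because noop leaves the output untouched, recording the failure is up to you. The sketch below logs it with the standard logging module, using only the outcome attributes shown above.

import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("guardrails-noop")

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail="noop")
)

res = guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")
if not res.validation_passed:
    # The raw output still flows through; we just record that it failed
    logger.warning("DetectPII flagged the output: %r", res.validated_output)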

fix to automatically fix the error

Note that not all validators implement a fix value. You can view a validator's FailResult implementation to see whether it provides one.

Here's an example that shows how DetectPII is written to return anonymized text as its fix value:

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail="fix")
)

res = guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")

print("Check if validated_output is valid text: ", res.validation_passed)
print("Scrubbed text: ", res.validated_output)


Check if validated_output is valid text: True
Scrubbed text: Hello, my name is <PERSON> and my email is <EMAIL_ADDRESS>
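
If you write your own validator and want on_fail="fix" to work with it, return a FailResult that carries a fix value. The hypothetical validator below is only a sketch; the import paths follow the classic guardrails.validators exports and may differ in your Guardrails version.

from typing import Any, Dict

# NOTE: import locations vary across Guardrails versions; adjust if needed
from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)

@register_validator(name="no-exclamations", data_type="string")
class NoExclamations(Validator):
    """Hypothetical validator: fails on '!' and offers a cleaned string as the fix."""

    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        if "!" in value:
            return FailResult(
                error_message="Text contains exclamation marks.",
                fix_value=value.replace("!", "."),  # consumed when on_fail="fix"
            )
        return PassResult()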

reask to automatically ask for an output that passes validation

The reask prompt is computed from the validators themselves: each validator's failure message is interpolated into a follow-up prompt asking the LLM to correct its output.

In order for the reask prompt to work, the following additional params must be provided:

  • messages
  • llm_api OR model

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail="reask"),
)

res = guard(
    messages=[{
        "role": "user",
        "content": "Make up a fake person and email address",
    }],
    model='gpt-4o-mini',
    num_reasks=1
)

print("Validated output: ", res.validated_output)
print("Number of reasks: ", len(guard.history.last.iterations) - 1)
    Validated output:  Sure! Here's a fictional person without any personal identifiable information:

**Name:** <PERSON>
**Email:** <EMAIL_ADDRESS>

Feel free to use this for any creative purposes!
Number of reasks: 1
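
If the LLM still fails validation after the allowed number of reasks, the returned outcome reports validation_passed as False. The sketch below falls back to a safe default in that case, using only attributes already shown above.

res = guard(
    messages=[{
        "role": "user",
        "content": "Make up a fake person and email address",
    }],
    model='gpt-4o-mini',
    num_reasks=2  # allow up to two corrective round trips
)

if res.validation_passed:
    print(res.validated_output)
else:
    # Reasks exhausted: fall back to a safe default instead of the raw output
    print("Could not produce a PII-free response.")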

custom to do anything else

# A custom on_fail can be as simple as a function

def custom_on_fail(value, fail_result):
    # This will turn up in validated output
    return "CUSTOM LOGIC COMPLETE!"

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail=custom_on_fail),
)
res = guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")

print(res.validated_output)


CUSTOM LOGIC COMPLETE!
# Of course, the function also has access to the fail_result and source text,
# so interesting logic/formatting over those is also possible
# Here, we show the specific char spans where the validator detected malfeasance

def custom_on_fail(value, fail_result):
    return f"""
String validated: {value}

Reasons it failed: {fail_result.error_spans}
"""

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail=custom_on_fail),
)
res = guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")

print(res.validated_output)
    
String validated: Hello, my name is John Doe and my email is john.doe@example.com

Reasons it failed: [ErrorSpan(start=18, end=26, reason='PII detected in John Doe')]
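
A custom handler can also combine behaviors, for example logging the failure and then falling back to the validator-supplied fix value when one exists. This sketch assumes the validator populates fix_value, as DetectPII does above.

import logging

logger = logging.getLogger("guardrails-custom")

def log_and_fix(value, fail_result):
    # Record why validation failed, then prefer the validator-supplied fix if present
    logger.warning("Validation failed: %s", fail_result.error_message)
    if fail_result.fix_value is not None:
        return fail_result.fix_value
    return "[REDACTED]"

guard = Guard().use(
    DetectPII(pii_entities="pii", on_fail=log_and_fix),
)
res = guard.validate("Hello, my name is John Doe and my email is john.doe@example.com")

print(res.validated_output)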