The Guardrails Hub
Search and explore the vast world of Guardrails validators with lightning-fast search.
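Every validator listed below is installed from the Hub and attached to a Guard in Python. The snippet that follows is a minimal sketch of that workflow, assuming the guardrails-ai package and the hub:// install path shown on each validator's page; Profanity Free is used purely as an illustration, and exact parameter names can vary per validator.

```python
# Minimal sketch of the Hub workflow (assumes the guardrails-ai package).
# Install the validator from the Hub first, e.g.:
#   guardrails hub install hub://guardrails/profanity_free
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Attach the validator to a Guard; on_fail controls what happens on a violation.
guard = Guard().use(ProfanityFree, on_fail="exception")

outcome = guard.validate("Thanks for reaching out, happy to help!")
print(outcome.validation_passed)
```

Several validators can be combined on a single Guard (recent guardrails-ai releases expose a use_many helper for this), which is what the hub's code-generation flow produces for a selection of validators.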
Validators (70)
Arize Dataset Embeddings: Validates that user-generated input does not match the dataset of jailbreak embeddings from Arize AI. (string, Brand risk, ML)
Ban List: Validates that the output does not contain banned words, using fuzzy search. (string, Brand risk, ML)
Bespoke MiniCheck: Validates that the LLM-generated text is supported by the provided context using BespokeLabs.AI's MiniCheck API. (string, Brand risk, +1 more, ML)
Bias Check: Validates that the text is free from biases related to age, gender, sex, ethnicity, religion, etc. (string, Brand risk, ML)
Competitor Check: Flags mentions of competitors. Fixes responses by filtering out competitor names. (string, Brand risk, ML)
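For instance, Competitor Check is typically configured with the list of competitor names to flag. A hedged sketch, assuming a competitors parameter as the description suggests (the exact argument name may differ on the validator's page):

```python
# Sketch only: `competitors` is an assumed parameter name.
#   guardrails hub install hub://guardrails/competitor_check
from guardrails import Guard
from guardrails.hub import CompetitorCheck

guard = Guard().use(
    CompetitorCheck,
    competitors=["Acme Corp", "Globex"],  # names to filter out of responses
    on_fail="fix",                        # rewrite the response rather than reject it
)
print(guard.validate("Our product beats Globex on price.").validated_output)
```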
Correct Language: Validates that LLM-generated text is in the expected language; if it is not, the validator attempts to translate it into the expected language. (string, Etiquette, ML)
Cucumber Expression Match: Validates that the input string matches a specified Cucumber expression. (string, Brand risk, ML)
Detect PII: Detects personally identifiable information (PII) in text, using Microsoft Presidio. (string, Data Leakage, ML)
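Detect PII is usually scoped to the entity types you care about. A minimal sketch, assuming a pii_entities parameter that takes Presidio-style entity labels (an assumption; check the validator's page for the exact signature):

```python
# Sketch only: `pii_entities` and its labels are assumed from Presidio conventions.
#   guardrails hub install hub://guardrails/detect_pii
from guardrails import Guard
from guardrails.hub import DetectPII

guard = Guard().use(
    DetectPII,
    pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"],
    on_fail="fix",  # redact the offending spans instead of failing hard
)
print(guard.validate("Reach me at jane@example.com").validated_output)
```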
Extracted Summary Sentences Match: Checks whether the extracted summary sentences match the original document. (string, Factuality, +1 more, ML)
Extractive Summary: Uses fuzzy matching to detect if some text is a summary of a document. (string, Factuality)
Gibberish Text: A Guardrails AI validator to detect gibberish text. (string, Brand risk, +3 more, ML)
High Quality Translation: A validator that checks if a translation is of high quality. (string, Etiquette, +1 more, ML)
Llama Guard: A Llama-based validator that checks whether a given prompt is safe or unsafe against a specified set of policies and lists the violated policies when applicable. (string, Etiquette, ML)
LLM RAG Evaluator: Uses an LLM judge to decide whether the LLM response is acceptable in a RAG application. (string, Factuality, +1 more, LLM)
Logic Check: Validates logical consistency and detects logical fallacies in the model output. Attempts to correct logical fallacies if found. (string, Brand risk, ML)
NSFW Text: A Guardrails AI validator to detect NSFW text. (string, Etiquette, ML)
Profanity Free: Checks for profanity in text, using the alt-profanity-check library. (string, Etiquette, +1 more, ML)
Provenance Embeddings: Compares embeddings of generated and source texts to calculate provenance. (string, Factuality, +1 more, ML)
Provenance LLM: A validator for ensuring factuality and reducing brand risk in generated content. (string, Factuality, +2 more, ML)
QA Relevance LLM Eval: Makes a second request to the LLM, asking it whether its original response was relevant to the prompt. (string, Jailbreaking, +1 more, LLM)
Relevancy Evaluator: Validates that the reference text contains information relevant to answering the original question. (string, Brand risk, ML)
Restrict to Topic: Determines if the text pertains to a specified topic. (string, Etiquette, +3 more, LLM)
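Restrict to Topic is driven by lists of allowed and disallowed topics. A sketch under the assumption that the parameters are named valid_topics and invalid_topics (assumptions; verify against the validator's documentation and use the install path shown on its Hub page):

```python
# Sketch only: parameter names below are assumptions.
#   guardrails hub install <hub:// path from the validator's page>
from guardrails import Guard
from guardrails.hub import RestrictToTopic

guard = Guard().use(
    RestrictToTopic,
    valid_topics=["billing", "account setup"],   # topics the response may cover
    invalid_topics=["politics"],                 # topics that trigger a failure
    on_fail="exception",
)
guard.validate("Here is how to update the card on your account.")
```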
Saliency Check: Checks if a generated summary covers topics present in a source document. (string, Factuality, LLM)
Secrets Present: Detects secrets present in text by matching against common patterns for API keys and other sensitive information. (string, code, +2 more, Rule)
Shield Gemma: A Gemma-based validator for moderating user prompts to guard against harmful content by specifying a policy. (string, Etiquette, ML)
Similar To Document: Checks if some generated text is similar to a provided document. (string, Factuality, ML)
Similar To Previous Values: Checks if a value is similar to a list of previously known correct values. (string, integer, +1 more, ML)
Toxic Language: Identifies and flags toxic language in text to ensure communications remain professional and appropriate. (string, Etiquette, ML)
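Toxic Language can typically be tuned by a score threshold and by whether it scores the whole output or individual sentences. A sketch assuming threshold and validation_method parameters (assumptions; confirm on the validator's page):

```python
# Sketch only: `threshold` and `validation_method` are assumed parameter names.
#   guardrails hub install hub://guardrails/toxic_language
from guardrails import Guard
from guardrails.hub import ToxicLanguage

guard = Guard().use(
    ToxicLanguage,
    threshold=0.5,                 # flag spans scored above this toxicity score
    validation_method="sentence",  # score sentence by sentence
    on_fail="fix",                 # drop the flagged sentences
)
print(guard.validate("Thanks for the report, we will look into it.").validation_passed)
```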
Wiki Provenance: A Guardrails AI validator that detects and removes hallucinated text based on Wikipedia. (string, Factuality)
Contains String: A Guardrails AI validator to check if the LLM-generated text contains a substring. (string, Formatting)
CSV Validator: Checks the CSV file for issues, including mismatched column lengths and inconsistent quote delimiters. (csv, Formatting, +1 more)
Detect Jailbreak: Detects attempts to circumvent safeguards in model conditioning. (string, Brand risk, +2 more, ML)
Endpoint Is Reachable: Checks if an endpoint can be reached by making a request to it. (string, Code Exploits, +1 more)
Ends With: Checks if a string or list ends with a specified string or list. (list, string, +1 more)
Exclude SQL Predicates: Checks for the use of particular SQL predicates in a query; excluding these predicates helps prevent SQL injection attacks. (string, sql, +3 more)
Financial Tone: Validates that an LLM-generated output (in a financial context) maintains a particular tone. (string, Etiquette, ML)
Grounded AI Hallucination: A Grounded AI validator that detects hallucinated text. (string, Factuality)
Guardrails PII: Detects personally identifiable information (PII) in text. (string, Data Leakage, ML)
Has Url: Ensures content contains a URL. (string, Code Exploits, +1 more)
LLM Critic: Grades the generated response against provided criteria. (string, Factuality, +1 more, LLM)
Lowercase: Passes when the text is entirely lowercase. (string, Formatting)
Mentions Drugs: Validates that the generated text does not contain any drug names. (string, Etiquette, ML)
MLcube RAG Context Evaluator: A validator that scores retrieved RAG context for relevance and usefulness to the user query. (string, Factuality)
One Line: Checks if the input is a single line of text. (string, Formatting)
Politeness Check: Ensures generated output is polite. (string, Etiquette, LLM)
Prompt Injection Detector: A Guardrails validator that scores prompts for injection attempts via a secondary LLM. (string, Jailbreaking, LLM)
Quotes Price: Validates that the generated text contains a price quote. (string, Brand risk, ML)
Reading Level: Parses text to find its readability as a US grade level number (0-12). (string, Etiquette, +1 more)
Reading Time: Ensures that any generated text is less than a maximum expected reading time. (string, Formatting)
Redundant Sentences: Identifies redundant sentences in text using fuzzy matching. (string, Etiquette, ML)
Regex Match: Ensures content matches a provided regular expression; useful for validating content such as email addresses, phone numbers, and more. (string, Formatting)
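Regex Match is the workhorse for structured string checks such as phone numbers or ticket IDs. A sketch assuming regex and match_type parameters (assumptions):

```python
# Sketch only: `regex` and `match_type` are assumed parameter names.
#   guardrails hub install hub://guardrails/regex_match
from guardrails import Guard
from guardrails.hub import RegexMatch

guard = Guard().use(
    RegexMatch,
    regex=r"^\d{3}-\d{3}-\d{4}$",  # e.g. a US-style phone number
    match_type="fullmatch",        # require the whole string to match
    on_fail="exception",
)
guard.validate("555-123-4567")
```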
Response Evaluator: Evaluates generated output using a provided question. (string, Factuality, +2 more, LLM)
Responsiveness Check: Checks that the generated output is responsive to the original prompt. (string, Factuality, LLM)
Sensitive Topic: A Guardrails AI validator that detects sensitive topics in text. (string, Etiquette, ML)
Sql Column Presence: Checks that schema columns are present in a SQL query. (string, sql, +3 more)
Toxic Language LLM: Detects toxic language in LLM-generated text using an LLM as the detection backbone. Evaluates text across seven toxicity categories: toxicity, severe toxicity, obscene, threat, insult, identity attack, and sexually explicit content. (string, Etiquette, LLM)
Two Words: Passes when the value is *exactly* two words. (string, Formatting)
Unusual Prompt: A Guardrails AI input validator that flags prompts that are unusual or attempt trickery. (string, Etiquette, +1 more, LLM)
Uppercase: Passes when the text is entirely uppercase. (string, Formatting)
Valid Address: Verifies an LLM-generated address using Google Maps' Address Validation API. (string, Formatting, +1 more)
Valid Choices: Checks if a given string is a valid choice from a list of choices. (string, Formatting)
Valid HTML: A Guardrails validator that checks whether content is parseable as HTML. (string, Formatting)
Valid JSON: Ensures content is parseable as valid JSON. (string, object, +3 more)
Valid Length: Ensures the length of a string or list falls between a minimum and maximum. (string, Formatting)
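Valid Length bounds how long the output may be. A sketch assuming min and max keyword arguments (assumptions; the exact names are on the validator's page):

```python
# Sketch only: `min` and `max` are assumed parameter names.
#   guardrails hub install hub://guardrails/valid_length
from guardrails import Guard
from guardrails.hub import ValidLength

guard = Guard().use(ValidLength, min=20, max=280, on_fail="reask")
outcome = guard.validate("A concise answer that fits within the length budget.")
print(outcome.validation_passed)
```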
Valid OpenAPI Specification: Ensures that a generated output is a valid OpenAPI Specification. (string, object, +2 more)
Valid Python: Validates whether the given Python code is syntactically correct. (python, Invalid Code)
Valid Range: Assesses whether a generated number falls between a minimum and maximum value. (integer, float, +1 more)
Valid SQL: Validates whether the given SQL code is syntactically correct; optionally accepts a database schema to validate against using SQLAlchemy. (sql, Invalid Code, +1 more)
Valid URL: Validates that text is a syntactically valid URL. (string, Formatting)
Web Sanitization: Scans LLM outputs for strings that could cause browser script execution downstream. (string, Code Exploits)
Filters
Use Cases: Chatbots, Customer Support, Structured data, RAG, Summarization, CodeGen, Text2SQL
Risk Category: Etiquette, Brand risk, Factuality, Formatting, Invalid Code, Jailbreaking, Code Exploits, Data Leakage
Infrastructure Requirements: ML, LLM, NA, Rule
Content Type: string, object, list, integer, float, sql, code, csv, python
Certification: Guardrails Certified
Language: en