The Guardrails Hub

Search and explore the vast world of Guardrails validators with lightning-fast search.
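
Every validator listed below is installed from the Hub with the guardrails CLI and then attached to a Guard in the Python SDK. The snippet that follows is a minimal sketch using the Toxic Language validator from this catalog; the threshold and validation_method arguments follow the Hub documentation, but exact parameter names can vary between releases.

    # Install the validator from the Hub first (shell):
    #   guardrails hub install hub://guardrails/toxic_language

    from guardrails import Guard
    from guardrails.hub import ToxicLanguage

    # Build a Guard that raises an exception when toxic language is detected.
    guard = Guard().use(
        ToxicLanguage(
            threshold=0.5,                 # confidence threshold for flagging a span
            validation_method="sentence",  # validate sentence by sentence
            on_fail="exception",           # alternatives include "fix", "filter", "noop"
        )
    )

    # A clean string passes and returns a ValidationOutcome with validation_passed=True.
    guard.validate("Thanks for reaching out; happy to help with your order.")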

70 validators
    Arize Dataset Embeddings
    Validates that user-generated input does not match the dataset of jailbreak embeddings from Arize AI.
    string
    Brand risk
    ML
    Ban List
    Validates that the output does not contain banned words, using fuzzy search.
    string
    Brand risk
    ML
    Bespoke MiniCheck
    Validates that the LLM-generated text is supported by the provided context using BespokeLabs.AI's MiniCheck API.
    string
    Brand risk
    ML
    Bias Check
    Validates that the text is free from biases related to age, gender, sex, ethnicity, religion, etc.
    string
    Brand risk
    ML
    Competitor Check
    Flags mentions of competitors. Fixes responses by filtering out competitor names.
    string
    Brand risk
    ML
    Correct Language
    Validates that LLM-generated text is in the expected language; if it is not, the validator attempts to translate it into the expected language.
    string
    Etiquette
    ML
    Cucumber Expression Match
    Validates that the input string matches a specified cucumber expression.
    string
    Brand risk
    ML
    Detect PII
    Detects personally identifiable information (PII) in text, using Microsoft Presidio.
    string
    Data Leakage
    ML
    Extracted Summary Sentences Match
    This validator checks if the extracted summary sentences match the original document.
    string
    Factuality
    ML
    Extractive Summary
    Uses fuzzy matching to detect if some text is a summary of a document.
    string
    Factuality
    Gibberish Text
    A Guardrails AI validator to detect gibberish text.
    string
    Brand risk
    ML
    High Quality Translation
    A validator that checks if a translation is of high quality.
    string
    Etiquette
    ML
    Llama Guard
    A Llama-based validator that checks whether a given prompt is safe or unsafe against a specified set of policies and lists the violated policies when applicable.
    string
    Etiquette
    ML
    LLM RAG Evaluator
    This validator uses an LLM Judge to decide whether the LLM response is acceptable in a RAG application.
    string
    Factuality
    LLM
    Logic Check
    Validates logical consistency and detects logical fallacies in the model output. Attempts to correct logical fallacies if found.
    string
    Brand risk
    ML
    NSFW Text
    A Guardrails AI validator to detect NSFW text.
    string
    Etiquette
    ML
    Profanity Free
    Checks for profanity in text, using the alt-profanity-check library.
    string
    Etiquette
    ML
    Provenance Embeddings
    Compares embeddings of generated and source texts to calculate provenance.
    string
    Factuality
    ML
    Provenance LLM
    A validator for ensuring factuality and reducing brand risk in generated content.
    string
    Factuality
    ML
    QA Relevance LLM Eval
    Makes a second request to the LLM, asking it if its original response was relevant to the prompt.
    string
    Jailbreaking
    LLM
    Relevancy Evaluator
    Validates that the reference text contains information relevant to answering the original question.
    string
    Brand risk
    ML
    Restrict to Topic
    Determines if the text pertains to a specified topic.
    string
    Etiquette
    LLM
    Saliency Check
    Checks if a generated summary covers topics present in a source document.
    string
    Factuality
    LLM
    Secrets Present
    Detects secrets present in text by matching against common patterns for API keys and other sensitive information.
    string
    code
    Rule
    Shield Gemma
    A Gemma-based validator that moderates user prompts against a specified policy to guard against harmful content.
    string
    Etiquette
    ML
    Similar To Document
    Checks if some generated text is similar to a provided document.
    string
    Factuality
    ML
    Similar To Previous Values
    Checks if a value is similar to a list of previously known correct values.
    string
    integer
    ML
    Toxic Language
    Identifies and flags toxic language in text to ensure communications remain professional and appropriate.
    string
    Etiquette
    ML
    Wiki Provenance
    A Guardrails AI validator that detects and removes hallucinated text based on Wikipedia.
    string
    Factuality
    Contains String
    A Guardrails AI validator to check if the LLM-generated text contains a substring.
    string
    Formatting
    CSV Validator
    Checks the CSV file for issues, including mismatched column lengths and inconsistent quote delimiters.
    csv
    Formatting
    Detect Jailbreak
    Detects attempts to circumvent safeguards in model conditioning.
    string
    Brand risk
    ML
    Endpoint Is Reachable
    Checks if an endpoint can be reached by making a request to it.
    string
    Code Exploits
    Ends With
    Check if a string or list ends with a specified string or list.
    list
    string
    Exclude SQL Predicates
    Checks for the use of particular SQL predicates in a query. Excluding specified predicates from queries helps prevent SQL injection attacks.
    string
    sql
    Financial Tone
    Validates that an LLM-generated output (in a financial context) maintains a particular tone.
    string
    Etiquette
    ML
    Grounded AI Hallucination
    A Grounded AI validator that detects hallucinated text.
    string
    Factuality
    Guardrails PII
    Detects personally identifiable information (PII) in text.
    string
    Data Leakage
    ML
    Has Url
    Ensure content contains a URL.
    string
    Code Exploits
    LLM Critic
    Grade the generated response based on provided criteria.
    string
    Factuality
    LLM
    Lowercase
    Passes when the text is entirely lowercase.
    string
    Formatting
    Mentions Drugs
    Validates that the generated text does not contain any drug names.
    string
    Etiquette
    ML
    MLcube RAG Context Evaluator
    A validator that scores retrieved RAG context for relevance and usefulness to the user query.
    string
    Factuality
    One Line
    This validator checks if the input is a single line of text.
    string
    Formatting
    Politeness Check
    Ensure generated output is polite.
    string
    Etiquette
    LLM
    Prompt Injection Detector
    A Guardrails validator that scores prompts for injection attempts via a secondary LLM.
    string
    Jailbreaking
    LLM
    Quotes Price
    Validates that the generated text contains a price quote.
    string
    Brand risk
    ML
    Reading Level
    Parses text to find its readability as a US grade level number (0-12).
    string
    Etiquette
    Reading Time
    Ensures that any generated text is less than a maximum expected reading time.
    string
    Formatting
    Redundant Sentences
    Identifies redundant sentences in text using fuzzy matching.
    string
    Etiquette
    ML
    Regex Match
    Ensure content matches a provided regular expression. This can be used to validate content such as email addresses, phone numbers, and more.
    string
    Formatting
    Response Evaluator
    Evaluate generated output using a provided question.
    string
    Factuality
    LLM
    Responsiveness Check
    Ensure generated output is responsive to the given prompt.
    string
    Factuality
    LLM
    Sensitive Topic
    A Guardrails AI validator that detects sensitive topics in text.
    string
    Etiquette
    ML
    Sql Column Presence
    Checks that schema columns are present in a SQL query.
    string
    sql
    Toxic Language LLM
    Detects toxic language in LLM-generated text using an LLM as the detection backbone. Evaluates text across seven toxicity categories: toxicity, severe toxicity, obscene, threat, insult, identity attack, and sexually explicit content.
    string
    Etiquette
    LLM
    Two Words
    Passes when value is *exactly* two words.
    string
    Formatting
    Unusual Prompt
    A Guardrails AI input validator that flags prompts that appear unusual or attempt trickery.
    string
    Etiquette
    LLM
    Uppercase
    Passes when the text is entirely uppercase.
    string
    Formatting
    Valid Address
    Verifies an LLM-generated address using Google Maps' Address Validation API.
    string
    Formatting
    Valid Choices
    Checks if a given string is a valid choice from a list of choices.
    string
    Formatting
    Valid HTML
    Guardrails validator that checks for HTML parseability.
    string
    Formatting
    Valid JSON
    Ensure content is parseable as valid JSON.
    string
    object
    Valid Length
    Ensures the length of a string or list falls between a minimum and maximum.
    string
    Formatting
    Valid OpenAPI Specification
    Ensures that a generated output is a valid OpenAPI Specification.
    string
    object
    Valid Python
    Validates whether the given Python code is syntactically correct.
    python
    Invalid Code
    Valid Range
    Assess whether a generated number is between a maximum and minimum value.
    integer
    float
    Valid SQL
    Validates whether the given SQL code is syntactically correct. Optionally accepts a database schema to validate against using SQLAlchemy.
    sql
    Invalid Code
    Valid URL
    Validates that text is a syntactically valid URL.
    string
    Formatting
    Web Sanitization
    Scans LLM outputs for strings that could cause browser script execution downstream.
    string
    Code Exploits
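
Validators from this catalog can also be composed: one Guard can stack several of the entries above, each with its own on_fail policy. The sketch below combines Detect PII, Valid Length, and Regex Match; the hub URIs and parameter names are taken from the Hub documentation and may differ slightly between versions.

    # Assumes the validators have been installed, e.g.:
    #   guardrails hub install hub://guardrails/detect_pii
    #   guardrails hub install hub://guardrails/valid_length
    #   guardrails hub install hub://guardrails/regex_match

    from guardrails import Guard
    from guardrails.hub import DetectPII, RegexMatch, ValidLength

    guard = Guard().use_many(
        DetectPII(pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"], on_fail="fix"),  # redact PII instead of failing
        ValidLength(min=1, max=500, on_fail="exception"),                           # hard limit on text length
        RegexMatch(regex=r"^[A-Z].*", on_fail="exception"),                         # must start with a capital letter
    )

    result = guard.validate("Contact me at jane@example.com for details.")
    print(result.validation_passed, result.validated_output)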