Dec 20, 2023
Announcing Guardrails AI 0.3.0

🎉 Exciting News! The team has been hard at work and is excited to announce that the latest release of Guardrails, v0.3.0, is now live!
TL;DR
🎉 New Features 🎉
Streaming!
Anthropic and Hugging Face models support
Input Validation
New validators!
OnTopic validator
Competitor check validator
ToxicLanguage validator
🔧 Improvements 🔧
Improved isHighQualityTranslation validator
👏🏽 New Contributors 👏🏽 We'd also like to thank our two new contributors with this release!
@emekaokoli19 made their first contribution in #486!
@tthoraldson made their first contribution in #411!
Recap — what is Guardrails AI?
Guardrails AI allows you to define and enforce assurance for AI applications, from structuring output to quality controls. It does this by creating a firewall-like bounding box around the LLM application (a Guard) that contains a set of validators. A Guard can include validators from our library or custom validators that enforce what your application is intended to do.
Streaming
As developers begin making usability improvements to LLM applications, streaming becomes an important tool in the toolbox. Guardrails now supports streaming for both unstructured text and structured JSON!
When streaming is enabled, the received chunks are concatenated to form valid fragments, which are validated one by one. As soon as a fragment is validated, it's streamed to the user. To form fragments for JSON outputs, we check whether each chunk completes a valid JSON value; if it doesn't, we either wait for more chunks or parse it accordingly. Once we have a valid fragment, we perform sub-schema validation between the fragment and the expected output schema.
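The chunk-accumulation step can be sketched in plain Python. This is an illustrative sketch of the idea, not Guardrails' actual implementation; `json.loads` stands in for the fragment check, and sub-schema validation is left out:

```python
import json

def form_fragments(chunks):
    """Accumulate streamed chunks into parseable JSON fragments.

    A buffer grows until it parses as valid JSON, at which point it
    is emitted as a fragment (ready for sub-schema validation) and
    the buffer resets.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        try:
            fragment = json.loads(buffer)
        except json.JSONDecodeError:
            continue  # not yet valid JSON: wait for more chunks
        yield fragment  # a valid fragment, ready to validate and stream
        buffer = ""

# Example: two JSON objects arriving in arbitrary chunk boundaries
chunks = ['{"na', 'me": ', '"Ada"}', '{"name"', ': "Bob"}']
print(list(form_fragments(chunks)))  # [{'name': 'Ada'}, {'name': 'Bob'}]
```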

See the Streaming documentation for more details.
Input Validation
Input Validation is one of the most requested features to date! Guardrails now supports validating inputs (prompts, instructions, msg_history) with string validators.
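Conceptually, input validation runs the same kind of string validators over the prompt before it ever reaches the LLM. A minimal sketch of that idea, assuming a simple convention where each validator returns an error message or None (the `no_override` validator is a hypothetical example, not one of ours):

```python
def validate_inputs(prompt, validators):
    """Run string validators over a prompt before the LLM call.

    Each validator returns an error message or None; any failure
    can be used to block the LLM call entirely.
    """
    return [msg for v in validators if (msg := v(prompt)) is not None]

# Hypothetical validator: reject prompts that try to override instructions
def no_override(prompt):
    if "ignore previous instructions" in prompt.lower():
        return "prompt attempts to override instructions"
    return None

print(validate_inputs("Summarize this article.", [no_override]))  # []
```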
See the InputValidation documentation for more details.
Anthropic and Hugging Face models support
We've heard your requests for better support of other models and are happy to share that we now support Anthropic and Hugging Face models!
See the docs on LLM API wrappers for more details.
New Validators
OnTopic Validator
We've released the OnTopic validator to keep your LLM application on topic - one of the most requested validators to date! The OnTopic validator accepts at least one valid topic and an optional list of invalid topics. By default, it first runs a Zero-Shot model, then falls back to asking OpenAI's gpt-3.5-turbo if the Zero-Shot model is not confident in the topic classification (score < 0.5). In our experiments, this LLM fallback increases accuracy by 15% but also increases latency (more than doubling it in the worst case). Both the Zero-Shot classification and the GPT classification can be toggled on or off.
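The fallback flow described above can be sketched as follows. This is an illustrative sketch only: the stub classifiers stand in for the real Zero-Shot model and the gpt-3.5-turbo call, and the 0.5 threshold mirrors the default mentioned above:

```python
def classify_topic(text, zero_shot, llm_fallback, threshold=0.5):
    """Zero-shot first; fall back to an LLM when confidence is low.

    zero_shot returns a (topic, score) pair; if the score is below
    the threshold, the text is sent to llm_fallback instead.
    """
    topic, score = zero_shot(text)
    if score < threshold:
        return llm_fallback(text)  # low confidence: defer to the LLM
    return topic

# Stub classifiers for illustration
zs = lambda text: ("sports", 0.3)   # unconfident zero-shot result
llm = lambda text: "finance"        # the LLM's classification
print(classify_topic("Q3 earnings beat estimates.", zs, llm))  # finance
```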
See the documentation on the OnTopic validator for more details.
CompetitorCheck validator
This validator checks LLM output for sentences that name one of your competitors. When on-fail is set to 'fix', the flagged sentences are removed from the final output. You need to provide an extensive list of your competitors' names, including all common variations (e.g. JP Morgan, JP Morgan Chase, etc.); the completeness of this list will directly affect the outcome of the validation.
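The 'fix' behavior can be sketched like this. It's an illustrative sketch, not the validator's real implementation: the sentence split is naive, and real matching is exactly why the name-variation list above matters:

```python
import re

def remove_competitor_sentences(text, competitors):
    """Drop sentences that name a competitor (the 'fix' behavior).

    Splits on sentence-ending punctuation and keeps only sentences
    that mention none of the listed competitor names.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text)
    kept = [
        s for s in sentences
        if not any(c.lower() in s.lower() for c in competitors)
    ]
    return " ".join(kept)

text = "Our rates are low. JP Morgan charges more. Open an account today."
print(remove_competitor_sentences(text, ["JP Morgan", "JP Morgan Chase"]))
# Our rates are low. Open an account today.
```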
See the documentation on CompetitorCheck validator for more details!
ToxicLanguage validator
This validator checks whether an LLM-generated response contains toxic language. It uses the pre-trained multi-label model unitary/unbiased-toxic-roberta from Hugging Face to check whether the generated text is toxic. It supports both full-text-level and sentence-level validation.
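The difference between the two validation levels can be sketched as follows. This is an illustrative sketch with a stub scorer standing in for the model's toxicity score; it is not the validator's real implementation:

```python
def validate_toxicity(sentences, score_fn, threshold=0.5, per_sentence=True):
    """Sentence-level vs full-text toxicity filtering.

    per_sentence=True drops only the toxic sentences; otherwise the
    whole text is rejected if its overall score crosses the threshold.
    """
    if per_sentence:
        return [s for s in sentences if score_fn(s) < threshold]
    full_text = " ".join(sentences)
    return sentences if score_fn(full_text) < threshold else []

# Stub scorer for illustration: flags one phrase as toxic
score = lambda text: 0.9 if "you idiot" in text else 0.1
print(validate_toxicity(["Thanks for asking.", "No, you idiot."], score))
# ['Thanks for asking.']
```

Sentence-level validation preserves the acceptable parts of a response, while full-text validation rejects it wholesale.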
See the documentation on ToxicLanguage validator for more details!
Validation Outcome
Previously, calling __call__ or parse on a Guard returned a tuple of the raw LLM output and the validated output, or just the validated output, respectively.
Now, in order to communicate more information, both calls return a ValidationOutcome class that contains the above information and more.
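The shape of the change looks roughly like this. The dataclass below is an illustrative sketch of the return type, and the field names are assumptions on our part; see the API Reference for the authoritative definition:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ValidationOutcome:
    """Illustrative shape of the new return type (field names are
    assumed; consult the API Reference for the real class)."""
    raw_llm_output: Optional[str]  # what the LLM actually returned
    validated_output: Any          # the output after validators ran
    validation_passed: bool        # whether all validators passed

# Instead of unpacking a positional tuple, callers read named fields:
outcome = ValidationOutcome("raw text", "validated text", True)
print(outcome.validation_passed)  # True
```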
See ValidationOutcome in the API Reference for more information on these additional properties.
History & Logs Improvements
If you're familiar with Guardrails, you might have used the Guard.state property to inspect how the Guard behaved over time. To make this process more transparent, v0.3.0 redesigns how you access this information.
Now, on a Guard, you can access logs related to any __call__ or parse call within the current session via Guard.history.
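The idea of per-call history can be sketched with a minimal stand-in class. This is not the real Guard API (see the Logs documentation for that); it only illustrates the pattern of recording each call for later inspection:

```python
class SketchGuard:
    """Minimal illustration of per-call history tracking.

    Each call appends a record of its inputs and outputs, so the
    session can be inspected afterwards via .history.
    """
    def __init__(self):
        self.history = []

    def __call__(self, prompt):
        output = prompt.upper()  # stand-in for the LLM call + validation
        self.history.append({"prompt": prompt, "output": output})
        return output

g = SketchGuard()
g("hello")
g("world")
print(len(g.history))  # 2: one record per call in this session
```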
See the Logs documentation or 0.3.x migration guide for more details.
Breaking changes
Refactoring response object from Guardrails. We now use ValidationOutcome.
Refactoring access to logs, see the new structure here.
Other changes
Shiny new docs! Restructuring documentation with more Pydantic examples, etc.
Migration guide
For more details on how to migrate to 0.3.0 please see our migration guide.
Take it for a spin!
You can install the latest version of Guardrails with:
pip install guardrails-ai
There are a number of ways to engage with us:
Join the discord: https://discord.gg/kVZEnR4WQK
Star us on Github: https://github.com/guardrails-ai/guardrails
Follow us on Twitter: https://twitter.com/guardrails_ai
Follow us on LinkedIn: https://www.linkedin.com/company/guardrailsai/
We're always looking for contributions from the open source community. Check out guardrails/issues for a list of good starter issues.


