Prerequisites
This document assumes you have set up Guardrails AI. You should also be familiar with foundational Guardrails AI concepts, such as guards and validators; for more information, see Quickstart: In-Application. You should also be familiar with the basic concepts of RAG; for the basics, see our blog post on reducing hallucination issues in GenAI apps.

This walkthrough downloads files from Guardrails Hub, our public directory of free validators. If you haven't already, create a Guardrails Hub API key and run guardrails configure to set it (see the command below). For more information on Guardrails Hub, see the Guardrails Hub documentation.
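If you still need to do that, the configuration step is a single command that prompts for your key:

```bash
# Prompts for your Guardrails Hub API key and stores it for the CLI
guardrails configure
```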
Unless you specify another LLM, LlamaIndex uses OpenAI for natural language queries as well as to generate vector embeddings. This requires generating and setting an OpenAI API key, which you can do on Linux with a command like the following (substitute your own key):
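```bash
# Replace the placeholder value with your own OpenAI API key
export OPENAI_API_KEY="<your-openai-api-key>"
```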
Install LlamaIndex
Install the LlamaIndex package:
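For example, with pip:

```bash
pip install llama-index
```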
Set up your data
Next, we'll need some sample data to feed into a vector database. Download the essay located here using curl on the command line (or open it in your browser and save it):
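For example, assuming you want the file in a local data directory that the later code reads from (the placeholder stands in for the essay's URL):

```bash
# Download the essay into a local data/ directory; replace the placeholder with the essay's URL
mkdir -p data
curl -o data/essay.txt "<essay-url>"
```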
Validate LlamaIndex calls
Next, call LlamaIndex without any guards to see what values it returns if you don't validate the output. Then add validation with the guardrails.integrations.llama_index.GuardrailsQueryEngine class, which is a thin wrapper around the LlamaIndex query engine, and run the same query again. Put together, the calls and their responses will look something like this:
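The sketch below is a minimal example: the ToxicLanguage validator stands in for whichever Hub validators your guard actually uses, the query text is only illustrative, and the GuardrailsQueryEngine constructor arguments are an assumption here; adapt all three to your own setup.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

from guardrails import Guard
from guardrails.hub import ToxicLanguage  # example validator; install it from the Hub first
from guardrails.integrations.llama_index import GuardrailsQueryEngine

# Build an index over the essay saved in ./data and query it with no validation.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

response = query_engine.query("What did the author do growing up?")
print(response)  # raw, unvalidated LLM answer

# Wrap the same engine so every answer is checked by the guard's validators.
guard = Guard().use(ToxicLanguage, on_fail="exception")
guarded_engine = GuardrailsQueryEngine(engine=query_engine, guard=guard)

guarded_response = guarded_engine.query("What did the author do growing up?")
print(guarded_response)  # answer that passed validation
```

The first print call shows the raw, unvalidated answer; the guarded engine only returns an answer that passes the guard's validators (with on_fail set to exception, a failed validation raises instead of returning output).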