
secrets_detection

guardrails hub install hub://guardrails/secrets_present --quiet
    Installing hub://guardrails/secrets_present...
✅Successfully installed guardrails/secrets_present!


Check whether an LLM-generated code response contains secrets

Using the SecretsPresent validator

This is a simple walkthrough of how to use the SecretsPresent validator to check whether an LLM-generated code response contains secrets. It utilizes the detect-secrets library, a Python library that scans code files for secrets and is available on GitHub.
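For context, here is a rough sketch of how the underlying detect-secrets library can be used on its own to scan a file. The file path is illustrative, and the snippet assumes the SecretsCollection API described in the library's README.

# Sketch: scanning a file directly with detect-secrets
# (illustrative path; assumes the SecretsCollection API from the library's README)
import json

from detect_secrets import SecretsCollection
from detect_secrets.settings import default_settings

secrets = SecretsCollection()
with default_settings():
    secrets.scan_file("example_config.py")  # hypothetical file to scan

# Print any findings as JSON, keyed by file name
print(json.dumps(secrets.json(), indent=2))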

# Install the necessary packages
pip install detect-secrets -q
# Import the guardrails package
# and import the SecretsPresent validator
# from Guardrails Hub
import guardrails as gd
from guardrails.hub import SecretsPresent
from rich import print
# Create a Guard object with this validator
# Here, we'll specify that we want to fix
# if the validator detects secrets

guard = gd.Guard.for_string(
    validators=[SecretsPresent(on_fail="fix")],
    description="testmeout",
)
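If you would rather reject the output outright than rewrite it, on_fail can be set to raise instead. The sketch below assumes the same Guard.for_string and parse API used in this walkthrough.

# Sketch: a stricter guard that raises when secrets are detected
# (assumes the same Guard.for_string / parse API shown above)
strict_guard = gd.Guard.for_string(
    validators=[SecretsPresent(on_fail="exception")],
    description="testmeout",
)

try:
    strict_guard.parse(llm_output='API_KEY = "sk-not-a-real-key"')
except Exception as e:
    print(f"Validation failed: {e}")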
# Let's run the validator on a dummy code snippet
# that contains a few secrets
code_snippet = """
import os
import openai

SECRET_TOKEN = "DUMMY_SECRET_TOKEN_abcdefgh"

ADMIN_CREDENTIALS = {"username": "admin", "password": "dummy_admin_password"}


openai.api_key = "sk-blT3BlbkFJo8bdtYwDLuZT"
COHERE_API_KEY = "qdCUhtsCtnixTRfdrG"
"""

# Parse the code snippet
output = guard.parse(
    llm_output=code_snippet,
)

# Print the output
print(output.validated_output)

import os
import openai

SECRET_TOKEN = "********"

ADMIN_CREDENTIALS = {"username": "admin", "password": "********"}


openai.api_key = "********"
COHERE_API_KEY = "********"

As you can see, the validator detected the secrets in the provided code snippet and masked them with asterisks.
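If you need to log or branch on the result, the outcome object returned by guard.parse carries more than just the fixed text; the attribute names below assume the standard ValidationOutcome fields.

# Sketch: inspecting the validation outcome
# (assumes the standard ValidationOutcome fields on the object returned by guard.parse)
print(output.raw_llm_output)     # the original, unmasked snippet
print(output.validation_passed)  # whether the output passed validation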

# Let's run the validator on a dummy code snippet
# that does not contain any secrets
code_snippet = """
import os
import openai

companies = ["google", "facebook", "amazon", "microsoft", "apple"]
for company in companies:
    print(company)
"""

# Parse the code snippet
output = guard.parse(
    llm_output=code_snippet,
)

# Print the output
print(output.validated_output)

import os
import openai

companies = ["google", "facebook", "amazon", "microsoft", "apple"]
for company in companies:
    print(company)

As you can see, the provided code snippet does not contain any secrets, and the validator did not produce any false positives.

In this way, you can use the SecretsPresent validator to check whether an LLM-generated code response contains secrets. With Guardrails as a wrapper, you can be assured that any secrets in the code will be detected and masked rather than exposed.
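To close the loop, a minimal end-to-end sketch might look like the following: generate code with the OpenAI client, then pass the raw text through the same guard before using it. The model name and prompt are illustrative, and the snippet assumes an OPENAI_API_KEY is set in the environment.

# Sketch: validating freshly generated code before it leaves your application
# (illustrative model and prompt; assumes OPENAI_API_KEY is set)
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Write a short Python script that calls a REST API."}
    ],
)

llm_code = response.choices[0].message.content
safe_code = guard.parse(llm_output=llm_code).validated_output
print(safe_code)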