
Securing Generative AI Deployments with NVIDIA NIM Microservices and NVIDIA NeMo Guardrails

Integrating NVIDIA NIMs with NeMo Guardrails

This tutorial contains all of the code snippets presented in the technical blog Securing Generative AI Deployments with NVIDIA NIM and NVIDIA NeMo Guardrails in a complete notebook. Please feel free to read the blog for full context.

As a reference for how to deploy NIM on your chosen infrastructure, check out this simple guide to deploying a NIM container and testing an inference request.

In this tutorial, we deploy two NIM microservices, a NeMo Retriever Embedding NIM and an LLM NIM. We then integrate both with NeMo Guardrails to prevent malicious use, such as attempts to hack user accounts through queries about personal data.

For the LLM NIM, we use Meta’s Llama-3.1-70B-Instruct model. For the embedding NIM, we use NVIDIA’s EmbedQA-E5-V5. The NeMo Retriever Embedding NIM assists the guardrails by converting each input query into an embedding vector. This enables efficient comparison against the guardrails policies: if a query matches a prohibited or out-of-scope policy, the guardrails prevent the LLM NIM from producing an unauthorized output.

By integrating these NIM microservices with NeMo Guardrails, we accelerate the performance of safety filtering and dialog management.

We will cover:

  • Defining the use case
  • Setting up a guardrailing system with NIM
  • Testing the integration

Defining the use case

In this example, we demonstrate how to intercept any incoming user questions that pertain to personal data using topical rails. These rails ensure that LLM responses stay on approved topics and do not share any sensitive information. They also help keep the LLM outputs on track by fact-checking before answering the user's questions. The integration pattern of these rails with the NIM microservices can be seen in the figure below:

An architectural diagram showing how Guardrails runtime works with the application code and the NIMs

Setting up a guardrailing system with NIM

Before we begin, let’s make sure that our NeMo Guardrails library is up to date. This tutorial requires version 0.9.1.1 or later.

We can check the version of the NeMo Guardrails library by running the following command in the terminal:

[ ]
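The original notebook cell is not reproduced here; assuming the environment is managed with pip, a typical version check looks like:

```shell
# Show the installed nemoguardrails package, including its version
pip show nemoguardrails
```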

If you do not have NeMo Guardrails installed, run the following command:

[ ]
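The package is published on PyPI as `nemoguardrails`, so a fresh install is a single command:

```shell
# Install the latest released version of NeMo Guardrails
pip install nemoguardrails
```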

If you have versions that are older than 0.9.1.1, upgrade to the latest version by running the following command:

[ ]
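Upgrading an existing installation follows the same pattern:

```shell
# Upgrade an existing installation to the latest version
pip install --upgrade nemoguardrails
```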

The next step is defining the configuration of the guardrails. To learn more, see the configuration guide. We start by creating the config directory as follows:

├── config
│   ├── config.yml
│   └── flows.co
[ ]
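One way to create that layout from the notebook (a sketch; the two files themselves are written in the following steps):

```python
import os

# Create the config directory that will hold config.yml and flows.co
os.makedirs("config", exist_ok=True)
```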

In the config.yml file, we configure the NIM as follows:

  • If you'd like to use NVIDIA-hosted NIMs, comment out the parameters and base_url lines.
  • If you'd like to use self-hosted NIMs, replace <BASE_URL_LLM_NIM> and <BASE_URL_EMBEDDING_NIM> with the base URLs of your own NIM deployments.
[ ]
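A config.yml along these lines wires both NIM microservices into Guardrails. The engine name and model identifiers below follow the blog's setup; double-check them against the configuration guide for your NeMo Guardrails version:

```yaml
models:
  - type: main
    engine: nim
    model: meta/llama-3.1-70b-instruct
    parameters:
      base_url: <BASE_URL_LLM_NIM>
  - type: embeddings
    engine: nim
    model: nvidia/nv-embedqa-e5-v5
    parameters:
      base_url: <BASE_URL_EMBEDDING_NIM>
```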

If you are testing with NVIDIA-hosted NIMs, make sure the parameters and base_url entries are commented out or removed from the config.yml file.

Check that the config.yml file has the correct content:

[ ]
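A quick check cell simply prints the file back (written defensively here so it also runs before the file exists):

```python
from pathlib import Path

# Print the file so we can eyeball the configuration
cfg = Path("config/config.yml")
if cfg.exists():
    print(cfg.read_text())
else:
    print("config/config.yml not found - create it first")
```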

Once the configuration for the respective NIM microservices is in place, we can add rails in the flows.co file. An example of adding dialog rails is as follows:

define user greet
  "Hello"

define bot greet
  "Hello. I'm the Guardrails Bot. How can I help you?"

define flow
  user greet
  bot greet

The script above shows a simple dialog rail that explicitly greets the user in a specific manner.

We can add more dialog rails in the flows.co file as follows:

define user ask about user sensitive data
  "Can you hack into someone's email account?"
  "How do I get my friend's photos without permission?"

define bot refuse to respond about user sensitive data
  "Apologies, but the Guardrails Bot can't help with actions that asks about user sensitive data. It's important to respect privacy."

define flow
  user ask about user sensitive data
  bot refuse to respond about user sensitive data
[ ]
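The cell that writes these rails to disk might look like the following sketch; the rail text is exactly the Colang shown above:

```python
from pathlib import Path

colang_content = """
define user greet
  "Hello"

define bot greet
  "Hello. I'm the Guardrails Bot. How can I help you?"

define flow
  user greet
  bot greet

define user ask about user sensitive data
  "Can you hack into someone's email account?"
  "How do I get my friend's photos without permission?"

define bot refuse to respond about user sensitive data
  "Apologies, but the Guardrails Bot can't help with actions that ask about user sensitive data. It's important to respect privacy."

define flow
  user ask about user sensitive data
  bot refuse to respond about user sensitive data
"""

# Ensure the directory exists, then write the dialog rails next to config.yml
Path("config").mkdir(exist_ok=True)
Path("config/flows.co").write_text(colang_content)
```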

Check that the flows.co file has the correct content:

[ ]
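And a matching check for the rails file:

```python
from pathlib import Path

# Print the Colang rails so we can verify them
flows = Path("config/flows.co")
if flows.exists():
    print(flows.read_text())
else:
    print("config/flows.co not found - create it first")
```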

With the Colang and YAML files in the config folder, we should be ready to set up our guardrails.

We can import the related libraries and load the config folder to instantiate our guardrails.

[ ]
[ ]

We are ready to test out our guardrails.

Testing the integration

First, we greet our LLM NIM through our guardrails and see if the guardrails pick up one of the predefined dialog rails.

[ ]

Here, our query to the LLM NIM is intercepted by the guardrails that we have set up because our query matches one of the predefined dialog rails. The NeMo Retriever Embedding NIM assists our guardrails by turning our query into an embedding vector. Our guardrails then perform a semantic search to find the most similar of the utterances we provided in flows.co.

Next, we ask the LLM NIM to provide us with a way to hack into a phone. This query falls under topics pertaining to personal data and is expected to be blocked by the guardrails based on the configuration.

[ ]

As seen, our guardrails are able to intercept the message and block the LLM NIM from responding to the query since we have defined dialog rails to prevent further discussion of this topic.

The tutorial above covers only a simple use case to get you started. To create a more robust guardrailing system, we encourage you to set up various types of rails and further customize them for your use cases.

Conclusion

In this post, we detailed the steps for integrating NVIDIA NIMs with NeMo Guardrails. In this instance, we were able to stop our application from responding to questions pertaining to personal data. With the integration of NVIDIA NIMs and NeMo Guardrails, developers are able to deploy AI models to production quickly and safely.