# Multi-Turn Conversation

## NeMo Data Designer: Synthetic Conversational Data with Person Details
### What you'll learn

- This notebook demonstrates how to use NeMo Data Designer to build a synthetic data generation pipeline step by step.
- We will create multi-turn user-assistant dialogues tailored for fine-tuning language models, enhanced with realistic person details.
- These datasets could be used for developing and enhancing conversational AI applications, including customer support chatbots, virtual assistants, and interactive learning systems.
### IMPORTANT: Environment Setup
If you haven't already, follow the instructions in the README to install the necessary dependencies.
You may need to restart your notebook's kernel after setting up the environment.
In this notebook, we assume you have a self-hosted instance of Data Designer up and running.
For deployment instructions, see the Installation Options section of the NeMo Data Designer documentation.
### Import the essentials

- The `data_designer` module of `nemo_microservices` exposes Data Designer's high-level SDK.
- The `essentials` module provides quick access to the most commonly used objects.
### Initialize the NeMo Data Designer Client

`NeMoDataDesignerClient` is responsible for submitting generation requests to the microservice.
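A minimal initialization sketch, assuming the module path described above and that the client is pointed at the base URL of your self-hosted instance (the constructor arguments and URL below are assumptions — adapt them to your deployment):

```python
# Sketch only: the exact constructor signature is an assumption.
from nemo_microservices.data_designer.essentials import NeMoDataDesignerClient

# Point the client at your self-hosted Data Designer instance (URL is an assumption).
data_designer_client = NeMoDataDesignerClient(base_url="http://localhost:8080")
```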
### Define model configurations

- Each `ModelConfig` defines a model that can be used during the generation process.
- The "model alias" is used to reference the model in the Data Designer config (as we will see below).
- The "model provider" is the external service that hosts the model (see the model config docs for more details).
- By default, the microservice uses build.nvidia.com as the model provider.
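As an illustration, a model config list might look like the sketch below. The field names and the model id are assumptions, not the authoritative schema — consult the model config docs for the exact fields:

```python
# Sketch only: field names ("alias", "model") and the model id are assumptions.
from nemo_microservices.data_designer.essentials import ModelConfig

model_configs = [
    ModelConfig(
        alias="conversation-model",           # referenced by alias in column configs
        model="meta/llama-3.3-70b-instruct",  # example model id on build.nvidia.com
    ),
]
```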
### Initialize the Data Designer Config Builder

- The Data Designer config defines the dataset schema and generation process.
- The config builder provides an intuitive interface for building this configuration.
- The list of model configs is provided to the builder at initialization.
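Initializing the builder might then look like this sketch (the class name and keyword argument are assumptions based on the description above):

```python
# Sketch only: class and parameter names are assumptions.
from nemo_microservices.data_designer.essentials import (
    DataDesignerConfigBuilder,
    ModelConfig,
)

model_configs = [
    ModelConfig(alias="conversation-model", model="meta/llama-3.3-70b-instruct"),
]
config_builder = DataDesignerConfigBuilder(model_configs=model_configs)
```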
### Define Pydantic Models for Structured Outputs

You can use Pydantic to define a structure for the messages that are produced by Data Designer.
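For example, a chat transcript can be modeled as a list of typed turns. The class and field names below are illustrative choices, not a schema mandated by Data Designer:

```python
from typing import List, Literal

from pydantic import BaseModel, Field


class Message(BaseModel):
    """A single turn in the dialogue."""

    role: Literal["user", "assistant"]
    content: str = Field(..., description="The text of this turn.")


class Conversation(BaseModel):
    """An ordered list of user/assistant turns."""

    messages: List[Message]


# Quick sanity check of the schema with hand-written turns.
conversation = Conversation(
    messages=[
        Message(role="user", content="Hi, my package never arrived."),
        Message(role="assistant", content="Sorry about that! Could you share the order number?"),
    ]
)
```

Passing a model like `Conversation` as the output format constrains generation to valid, parseable JSON rather than free-form text.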
### Adding Sampler Columns

- Sampler columns offer non-LLM-based generation of synthetic data.
- They are particularly useful for steering the diversity of the generated data, as we demonstrate below.
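To illustrate the idea behind sampler columns, here is a plain-Python analogue (the category values are made up for illustration): each record draws its values from fixed category sets, with no LLM involved, which lets you control the mix of scenarios in the dataset:

```python
import random

# Hypothetical category sets used to steer diversity; real sampler columns
# are configured in Data Designer rather than sampled by hand like this.
domains = ["Tech Support", "Banking", "Travel", "Healthcare"]
user_moods = ["calm", "confused", "frustrated"]

rng = random.Random(7)  # fixed seed for reproducibility
records = [
    {"domain": rng.choice(domains), "user_mood": rng.choice(user_moods)}
    for _ in range(5)
]
```

Because every record gets a sampled domain and mood, downstream LLM prompts can reference these values to produce varied conversations instead of collapsing onto a few generic scenarios.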
### Adding LLM-Generated Columns

Now define the columns that the model will generate. These prompts instruct the LLM to produce the actual conversation:

- a system prompt to guide how the AI assistant engages in the conversation with the user,
- the conversation itself, and
- finally, a `toxicity_label` that assesses user toxicity over the entire conversation.
### AI Assistant System Prompt and Conversation

We first generate a system prompt to ground the AI assistant, then generate the entire conversation.
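As a sketch, prompt templates for these columns might reference sampler columns via placeholders; the template wording and placeholder names here are illustrative:

```python
# Illustrative template; {domain} and {user_mood} stand in for sampler columns
# that would be filled per record at generation time.
system_prompt_template = (
    "You are a helpful customer-service assistant for the {domain} domain. "
    "Stay polite and concise, even if the user is {user_mood}."
)

# Example of how one record's sampled values would render the template.
system_prompt = system_prompt_template.format(domain="Tech Support", user_mood="frustrated")
```

Tying the prompt to sampled columns is what carries the diversity from the sampler step through to the generated dialogues.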
### LLM-as-a-Judge: Toxicity Assessment

When generating our synthetic dataset, we need to determine the quality of the generated dialogues. We use the LLM-as-a-Judge strategy to do this: we define the rubric the LLM should use to assess generation quality, along with a prompt that provides the relevant instructions.
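A sketch of such a rubric and judge prompt follows; the label names and criteria wording are illustrative, not a fixed Data Designer schema:

```python
# Illustrative rubric mapping toxicity labels to scoring criteria.
toxicity_rubric = {
    "Non-Toxic": "The user is respectful; no offensive or harmful language.",
    "Mildly Toxic": "Occasional rude, dismissive, or sarcastic remarks.",
    "Toxic": "Sustained insults, harassment, or clearly harmful content.",
}

# Judge prompt; {conversation} is left as a placeholder to be filled per record.
judge_prompt = (
    "Read the conversation below and classify the USER's toxicity as one of: "
    + ", ".join(toxicity_rubric)
    + ".\n\nCriteria:\n"
    + "\n".join(f"- {label}: {criteria}" for label, criteria in toxicity_rubric.items())
    + "\n\nConversation:\n{conversation}"
)
```

Keeping the labels in a single dict means the allowed answer set and the criteria text cannot drift apart.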
### Iteration is key: preview the dataset!

- Use the `preview` method to quickly generate a sample of records.
- Inspect the results for quality and format issues.
- Adjust column configurations, prompts, or parameters as needed.
- Re-run the preview until you are satisfied.
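The loop above might look like this sketch (only the `preview` method name comes from this notebook; its arguments and the shape of the returned object are assumptions):

```python
# Sketch only: assumes `data_designer_client` and `config_builder` from the
# earlier setup sections; argument and attribute names are assumptions.
preview = data_designer_client.preview(config_builder)

# Inspect a few records, tweak prompts/columns in the builder, and re-run.
print(preview)
```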
### Analyze the generated data

- Data Designer automatically generates a basic statistical analysis of the generated data.
- This analysis is available via the `analysis` property of generation result objects.
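Accessing the analysis might look like the following; only the `analysis` property name is taken from the text above, everything else is an assumption:

```python
# Sketch only: assumes `data_designer_client` and `config_builder` from the
# earlier setup sections.
preview = data_designer_client.preview(config_builder)
print(preview.analysis)  # basic statistical summary of the generated records
```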
### Scale up!

- Happy with your preview data?
- Use the `create` method to submit larger Data Designer generation jobs.
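Submitting a larger job might look like this sketch (only the `create` method name comes from the text; the record-count parameter and any job-handling helpers are assumptions):

```python
# Sketch only: assumes `data_designer_client` and `config_builder` from the
# earlier setup sections; parameter names are assumptions.
job = data_designer_client.create(config_builder, num_records=1_000)
```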