Supervised Fine-Tuning for Instruction Following
Gemma is a new open model in the Gemini family of models from Google. Gemma is highly capable yet compact enough to run locally on NVIDIA RTX GPUs, and is available in two sizes: 2B and 7B parameters. With NVIDIA NeMo, you can customize Gemma to fit your use case and deploy an optimized model on your NVIDIA GPU.
In this tutorial, we'll go over a specific kind of customization -- full-parameter supervised fine-tuning for instruction following (also known as SFT). To learn how to perform Low-Rank Adaptation (LoRA) tuning to follow a specific output format, see the companion notebook. For SFT, we'll show how you can kick off a multi-GPU training job with an example script so that you can train on 8 GPUs. The exact number of GPUs needed will depend on which model you use and what kind of GPUs you use, but we recommend 8 A100-80GB GPUs.
We'll also learn how to export your custom model to TensorRT-LLM, an open-source library that accelerates and optimizes inference performance of the latest LLMs on the NVIDIA AI platform.
Introduction
Supervised Fine-Tuning (SFT) is the process of fine-tuning all of a model’s parameters on supervised data of inputs and outputs. It teaches the model how to follow user-specified instructions and is typically done after model pre-training. This notebook describes the steps involved in fine-tuning Gemma for instruction following. Gemma was released with a checkpoint already fine-tuned for instruction following, but here we'll learn how we can tune our own model starting with the pre-trained checkpoint to achieve a similar outcome.
Download the base model
For all of our customization and deployment processes, we'll need to start off with a pre-trained version of Gemma in the .nemo format. You can download the base model in .nemo format from the NVIDIA GPU Cloud, or convert checkpoints from another framework into a .nemo file. You can choose to use the 2B parameter or 7B parameter Gemma models for this notebook -- the 2B model will be faster to customize, but the 7B model will be more capable.
You can download either model from the NVIDIA NGC Catalog, using the NGC CLI. The instructions to install and configure the NGC CLI can be found here.
To download the model, execute one of the following commands, based on which model you want to use:
ngc registry model download-version "nvidia/nemo/gemma_2b_base:1.1"
or
ngc registry model download-version "nvidia/nemo/gemma_7b_base:1.1"
Getting NeMo Framework
NVIDIA NeMo Framework is a generative AI framework built for researchers and PyTorch developers working on large language models (LLMs), multimodal models (MMs), automatic speech recognition (ASR), and text-to-speech synthesis (TTS). The primary objective of NeMo is to provide a scalable framework for researchers and developers from industry and academia to more easily implement and design new generative AI models by leveraging existing code and pretrained models.
If you haven't already, you can pull a container that includes the version of NeMo Framework and all dependencies needed for this notebook with the following:
docker pull nvcr.io/nvidia/nemo:24.01.gemma
The best way to run this notebook is from within the container. You can do that by launching the container with the following command:
docker run -it --rm --gpus all --ipc=host --network host -v $(pwd):/workspace nvcr.io/nvidia/nemo:24.01.gemma
Then, from within the container, start the Jupyter server with:
jupyter lab --no-browser --port=8080 --allow-root --ip 0.0.0.0
SFT Data Formatting
To begin, we'll need to prepare a dataset to tune our model on.
This notebook uses the Dolly dataset as an example to demonstrate how to format your SFT data. This dataset consists of 15,000 instruction-context-response triples.
First, to download the data, enter the following command:
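One way to fetch the dataset is from its Hugging Face repository (the exact hosting URL is an assumption; any mirror of `databricks-dolly-15k.jsonl` will work):

```shell
wget https://huggingface.co/datasets/databricks/databricks-dolly-15k/resolve/main/databricks-dolly-15k.jsonl
```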
The downloaded data, stored at databricks-dolly-15k.jsonl, is a JSONL file with each line formatted like this:
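The field names below match the Dolly schema (instruction, context, response, category); the values are illustrative, not an actual record from the dataset:

```json
{"instruction": "Summarize the passage in one sentence.", "context": "Dolly is an open-source instruction-following dataset released by Databricks.", "response": "Dolly is an open instruction dataset from Databricks.", "category": "summarization"}
```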
As this example shows, there are no explicit “input” and “output” fields, which are required for SFT with NeMo. To remedy this, we can do some data pre-processing. This cell converts the instruction, context, and response fields into input and output: it concatenates the instruction and context fields with a \n\n separator, randomizes the order in which they appear in the input, and writes the result to a new JSONL file called databricks-dolly-15k-output.jsonl.
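A minimal sketch of that pre-processing cell (the function and variable names are our own; the field names match the Dolly schema and the file paths come from the text above):

```python
import json
import random

def to_input_output(record):
    """Convert one Dolly record into the input/output fields NeMo expects."""
    instruction = record["instruction"].strip()
    context = record.get("context", "").strip()
    if context:
        # Concatenate instruction and context with a \n\n separator,
        # randomizing which one comes first in the input.
        parts = [instruction, context]
        random.shuffle(parts)
        text = "\n\n".join(parts)
    else:
        text = instruction
    return {"input": text, "output": record["response"]}

def convert(in_path="databricks-dolly-15k.jsonl",
            out_path="databricks-dolly-15k-output.jsonl"):
    """Rewrite the whole dataset line by line into the new format."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            fout.write(json.dumps(to_input_output(json.loads(line))) + "\n")
```

Calling `convert()` after the download step produces databricks-dolly-15k-output.jsonl next to the original file.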
Now, the dataset is a JSONL file with each line formatted like this:
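Reusing the illustrative record from above, a converted line would look like this (values are illustrative):

```json
{"input": "Summarize the passage in one sentence.\n\nDolly is an open-source instruction-following dataset released by Databricks.", "output": "Dolly is an open instruction dataset from Databricks."}
```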
SFT Training
To perform the SFT training, we'll use NVIDIA NeMo-Aligner. NeMo-Aligner is a scalable toolkit for efficient model alignment, built on the NeMo Toolkit, which allows training to scale to thousands of GPUs using tensor, data, and pipeline parallelism for all components of alignment. Users can perform end-to-end model alignment on a wide range of model sizes and take advantage of all of these parallelism techniques to ensure their alignment runs are performant and resource-efficient.
To install NeMo Aligner, we can clone the repository and install it using pip:
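For example, from inside the container (the repository is NVIDIA's public NeMo-Aligner repo on GitHub; pinning a release tag is recommended but omitted here):

```shell
git clone https://github.com/NVIDIA/NeMo-Aligner.git
cd NeMo-Aligner
pip install -e .
```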
If you want to track and visualize your SFT training experiments, you can log in to Weights and Biases. If you don't want to use wandb, make sure to set the argument exp_manager.create_wandb_logger=False when launching your job.
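Logging in is a one-time step with the wandb CLI (assumes you already have a Weights and Biases API key):

```shell
wandb login
```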
To run SFT locally on a single node, you can use the following command. Note the trainer.num_nodes and trainer.devices arguments, which define how many nodes and how many total GPUs you want to use for training. Make sure the source model, output model, and dataset paths all match your local setup.
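A sketch of that launch command, based on NeMo-Aligner's SFT example script; the checkpoint filename, learning rate, and some argument names are assumptions that may differ across versions, so check them against your install:

```shell
python NeMo-Aligner/examples/nlp/gpt/train_gpt_sft.py \
    name=gemma_dolly_finetuned \
    trainer.num_nodes=1 \
    trainer.devices=8 \
    trainer.precision=bf16 \
    model.restore_from_path=gemma_7b_base.nemo \
    model.data.train_ds.file_path=databricks-dolly-15k-output.jsonl \
    model.data.validation_ds.file_path=databricks-dolly-15k-output.jsonl \
    model.answer_only_loss=True \
    model.optim.lr=5e-6 \
    exp_manager.explicit_log_dir=results \
    exp_manager.create_wandb_logger=True
```

With name=gemma_dolly_finetuned and exp_manager.explicit_log_dir=results, the final checkpoint lands at results/checkpoints/gemma_dolly_finetuned.nemo.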
If you'd like to perform multi-node finetuning -- for example on a slurm cluster -- you can find more information in the NeMo-Aligner user guide.
When training is finished, you should see a file called results/checkpoints/gemma_dolly_finetuned.nemo that contains the weights of your new, instruction-tuned model.
Exporting to TensorRT-LLM
TensorRT-LLM is an open-source library for optimizing inference performance to achieve state-of-the-art speed on NVIDIA GPUs. The NeMo framework offers an easy way to compile .nemo models into optimized TensorRT-LLM engines which you can run locally embedded in another application, or serve to other applications using a server like Triton Inference Server.
To start with, let's create a folder where our exported model will land:
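The folder name here is an arbitrary choice for this example:

```shell
mkdir -p gemma_dolly_finetuned_trtllm
```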
To export the model, we just need to create an instance of the TensorRTLLM class and call the TensorRTLLM.export() function -- pointing the nemo_checkpoint_path argument to the newly fine-tuned model we trained above.
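A sketch of the export step using NeMo's nemo.export module (the model_dir value matches the folder created above, and n_gpus=1 is a choice for this example; this must run inside the NeMo container on a machine with a GPU):

```python
from nemo.export import TensorRTLLM

# Point the exporter at the folder where the engine files should be written.
trt_llm_exporter = TensorRTLLM(model_dir="gemma_dolly_finetuned_trtllm")

# Compile the fine-tuned .nemo checkpoint into a TensorRT-LLM engine.
trt_llm_exporter.export(
    nemo_checkpoint_path="results/checkpoints/gemma_dolly_finetuned.nemo",
    model_type="gemma",
    n_gpus=1,
)
```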
This creates a couple of files in the folder we created -- an engine file that holds the weights and the compiled execution graph of the model, a tokenizer.model file which holds the tokenizer information, and config.json which holds some metadata about the model (along with model.cache, which caches some operations and makes it faster to re-compile the model in the future.)
With the model exported to TensorRT-LLM, we can perform very fast inference:
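For example (this assumes the exporter instance from the export step is named trt_llm_exporter; the sampling argument names are from memory and may differ by NeMo version):

```python
# Greedy decoding (top_k=1) on a single prompt; the prompt is illustrative.
output = trt_llm_exporter.forward(
    ["What does SFT stand for in the context of large language models?"],
    max_output_token=100,
    top_k=1,
    top_p=0.0,
    temperature=1.0,
)
print(output)
```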
There's also a convenient function to deploy the model as a service, backed by Triton Inference Server:
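A sketch using NeMo's deploy module (the triton_model_name is a choice for this example, and the deploy/serve method names are from memory; verify them against your NeMo version):

```python
from nemo.deploy import DeployPyTriton

# Wrap the exported model in a Triton deployment.
nm = DeployPyTriton(model=trt_llm_exporter, triton_model_name="gemma")
nm.deploy()  # load the model onto the Triton server
nm.serve()   # block and serve inference requests
```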