Dolly-12B DeepSpeed SageMaker
Deploying Dolly-12B on SageMaker using DeepSpeed Large Model Container DLC
In this notebook, we explore how to host a large language model on SageMaker using the Large Model Inference container that is optimized for hosting large models using DJLServing. DJLServing is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming language agnostic. To learn more about DJL and DJLServing, you can refer to our recent blog post.
In this notebook, we deploy the databricks/dolly-v2-12b model.
This notebook was tested on a ml.t3.medium instance using the Python 3 (Data Science) kernel on SageMaker Studio.
License information
Please view the license information for this model here
Create a SageMaker Model for Deployment
As a first step, we'll import the relevant libraries and configure several global variables such as the hosting image that will be used and the S3 location of our model artifacts.
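A minimal sketch of this setup cell is shown below, assuming the dolly-v2-12b weights have already been staged in your own S3 bucket. The S3 paths and the LMI container version are assumptions; adjust them for your environment.

```python
import boto3
import sagemaker
from sagemaker import image_uris

# SageMaker session, execution role, and default bucket
sess = sagemaker.session.Session()
role = sagemaker.get_execution_role()
region = sess.boto_region_name
bucket = sess.default_bucket()

sm_client = boto3.client("sagemaker")
smr_client = boto3.client("sagemaker-runtime")

# Large Model Inference (DJLServing + DeepSpeed) container image for this region
# (the version is an assumption; use the latest LMI release available to you)
inference_image_uri = image_uris.retrieve(
    framework="djl-deepspeed", region=region, version="0.22.1"
)

# S3 location of the pre-staged dolly-v2-12b model artifacts (assumed path)
pretrained_model_s3_uri = f"s3://{bucket}/dolly-v2-12b/model/"
# S3 prefix where the serving.properties tarball will be uploaded
s3_code_prefix = "dolly-v2-12b/code"
```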
Deploying a Large Language Model using Hugging Face Accelerate
The DJL Inference Image which we will be utilizing ships with a number of built-in inference handlers for a wide variety of tasks including:
- text-generation
- question-answering
- text-classification
- token-classification
You can refer to this Git repository for a list of additional handlers and available NLP tasks.
These handlers can be utilized as is without having to write any custom inference code. We simply need to create a serving.properties text file with our desired hosting options and package it up into a tar.gz artifact.
Let's take a look at the serving.properties file that we'll be using for our first example.
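A sketch of the file is shown below, assuming the model artifacts live at the `pretrained_model_s3_uri` defined earlier; the exact values are illustrative.

```python
# serving.properties for the Hugging Face Accelerate example (illustrative values)
serving_properties = f"""engine=Python
option.entryPoint=djl_python.huggingface
option.s3url={pretrained_model_s3_uri}
option.task=text-generation
option.device_map=auto
option.load_in_8bit=TRUE
"""

with open("serving.properties", "w") as f:
    f.write(serving_properties)

print(serving_properties)
```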
There are a few options specified here. Let's go through them in turn:
- `engine` - specifies the engine that will be used for this workload. In this case we'll be hosting the model using the DJL Python Engine.
- `option.entryPoint` - specifies the entrypoint code that will be used to host the model. `djl_python.huggingface` refers to the `huggingface.py` module from the djl_python repo.
- `option.s3url` - specifies the location of the model files. Alternatively, an `option.model_id` option can be used instead to specify a model from the Hugging Face Hub (e.g. `EleutherAI/gpt-j-6B`) and the model will be automatically downloaded from the Hub. The `s3url` approach is recommended as it allows you to host the model artifact within your own environment and enables faster deployments by utilizing the optimized approach within the DJL inference container to transfer the model from S3 onto the hosting instance.
- `option.task` - This is specific to the `huggingface.py` inference handler and specifies which task this model will be used for.
- `option.device_map` - Enables layer-wise model partitioning through Hugging Face Accelerate. With `option.device_map=auto`, Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest onto the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
- `option.load_in_8bit` - Quantizes the model weights to int8, thereby greatly reducing the memory footprint of the model from the initial FP32. See this blog post from Hugging Face for additional information.
For more information on the available options, please refer to the SageMaker Large Model Inference Documentation
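Deploying a Large Language Model using DeepSpeed
To host the same model with DeepSpeed, we only need to change a few options in serving.properties. A sketch is shown below; the tensor parallel degree of 4 is an assumption, so set it to the number of GPUs on your hosting instance.

```python
# serving.properties for the DeepSpeed example (illustrative values)
serving_properties = f"""engine=DeepSpeed
option.entryPoint=djl_python.deepspeed
option.s3url={pretrained_model_s3_uri}
option.task=text-generation
option.tensor_parallel_degree=4
"""

with open("serving.properties", "w") as f:
    f.write(serving_properties)
```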
Notice that the engine parameter is now set to DeepSpeed and the option.entryPoint has been modified to use the deepspeed.py module. Python scripts that use DeepSpeed cannot be launched as traditional Python scripts (i.e. python deepspeed.py would not work). Setting engine=DeepSpeed will automatically configure the environment and launch the inference script appropriately. The only other new parameter here is option.tensor_parallel_degree, where we have to specify the number of GPU devices across which the model will be sharded.
Unlike Accelerate, where the model is partitioned along its layers, DeepSpeed uses tensor parallelism, where individual layers (tensors) are sharded across devices. For example, each GPU holds a slice of each layer.
Whereas with the layer-wise approach the data flows through each GPU device sequentially, here the data is sent to all GPU devices, and a partial result is computed on each GPU. The partial results are then collected through an All-Gather operation to compute the final result. Tensor parallelism generally provides higher GPU utilization and better performance.
We place the serving.properties file into a tarball and upload it to S3
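A sketch of this step is shown below, following the layout used by the SageMaker LMI examples; the tarball and directory names are assumptions.

```python
import tarfile

# Package serving.properties into a tar.gz artifact with a top-level model directory
code_tarball = "model.tar.gz"
with tarfile.open(code_tarball, "w:gz") as tar:
    tar.add("serving.properties", arcname="dolly-v2-12b/serving.properties")

# Upload the code artifact to S3 (sess and s3_code_prefix were defined earlier)
s3_code_artifact = sess.upload_data(code_tarball, bucket, s3_code_prefix)
print(f"Code artifact uploaded to: {s3_code_artifact}")
```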
Deploy Model to a SageMaker Endpoint
With a helper function we can now deploy our endpoint and invoke it with some sample inputs
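A sketch of such a helper using the low-level boto3 client is shown below. The endpoint name, instance type (ml.g5.12xlarge), and startup timeout are assumptions; pick an instance with enough GPU memory for a 12B-parameter model.

```python
from sagemaker.utils import name_from_base


def deploy_model(image_uri, model_data_s3_uri, role, endpoint_name, instance_type, sm_client):
    """Create a SageMaker model, endpoint config, and endpoint for the LMI container."""
    model_name = name_from_base("dolly-v2-12b-lmi")
    sm_client.create_model(
        ModelName=model_name,
        ExecutionRoleArn=role,
        PrimaryContainer={"Image": image_uri, "ModelDataUrl": model_data_s3_uri},
    )
    sm_client.create_endpoint_config(
        EndpointConfigName=endpoint_name,
        ProductionVariants=[
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "InstanceType": instance_type,
                "InitialInstanceCount": 1,
                # Large models can take a while to download and load
                "ContainerStartupHealthCheckTimeoutInSeconds": 900,
            }
        ],
    )
    sm_client.create_endpoint(EndpointName=endpoint_name, EndpointConfigName=endpoint_name)
    # Block until the endpoint is in service
    sm_client.get_waiter("endpoint_in_service").wait(EndpointName=endpoint_name)
    return endpoint_name


endpoint_name = name_from_base("dolly-v2-12b-endpoint")
deploy_model(
    inference_image_uri, s3_code_artifact, role, endpoint_name, "ml.g5.12xlarge", sm_client
)
```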
Let's run an example with a basic text-generation prompt: "Large model inference is"
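Assuming the built-in handler's Hugging Face style payload schema (the generation parameter values below are illustrative), the invocation might look like this:

```python
import json

# Invoke the endpoint with a simple text-generation prompt
response = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=json.dumps(
        {
            "inputs": "Large model inference is",
            "parameters": {"max_new_tokens": 100, "do_sample": True, "temperature": 0.7},
        }
    ),
)
print(response["Body"].read().decode("utf-8"))
```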
Now let's try the model
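Since dolly-v2-12b is instruction-tuned, we can also send it an instruction to follow; the prompt below is just an illustrative example.

```python
# Ask the model to follow an instruction rather than complete a sentence
prompt = "Explain to me the difference between nuclear fission and fusion."

response = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 200}}),
)
print(response["Body"].read().decode("utf-8"))
```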