
Deploy the Llama 2 70B model with high performance on SageMaker using SageMaker LMI and rolling batch

In this notebook, we explore how to host a Llama 2 large language model with FP16 precision on SageMaker using DeepSpeed. We use DJLServing, bundled in the LMI container, as the model serving solution in this example. DJLServing is a high-performance, universal model serving solution powered by the Deep Java Library (DJL) that is programming-language agnostic. To learn more about DJL and DJLServing, you can refer to our recent blog post (https://aws.amazon.com/blogs/machine-learning/deploy-bloom-176b-and-opt-30b-on-amazon-sagemaker-with-large-model-inference-deep-learning-containers-and-deepspeed/).

Model parallelism can help deploy large models that would normally be too large for a single GPU. With model parallelism, we partition and distribute a model across multiple GPUs. Each GPU holds a different part of the model, resolving the memory capacity issue for the largest deep learning models with billions of parameters.

SageMaker has rolled out a DeepSpeed-enabled LMI container, which lets users leverage managed serving capabilities and takes care of the undifferentiated heavy lifting.

In this notebook, we deploy the https://huggingface.co/TheBloke/Llama-2-70b-fp16 model on an ml.g5.48xlarge instance.

License agreement

[ ]
[ ]
[ ]
[ ]

[OPTIONAL] Download the model from Hugging Face and upload the model artifacts on Amazon S3

If you intend to download your own copy of the model and upload it to an S3 location in your AWS account, follow the steps below; otherwise, you can skip to the next step.

[ ]
[ ]
[ ]
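
A minimal sketch of this optional step, assuming the huggingface_hub and sagemaker Python packages are installed; the bucket and prefix below are placeholders to replace with your own:

from pathlib import Path
from huggingface_hub import snapshot_download
from sagemaker.s3 import S3Uploader

# Local staging directory for the downloaded weights (sketch only).
local_model_path = Path("./llama2-70b-fp16")
local_model_path.mkdir(exist_ok=True)

# Download only the files needed for inference (configs, tokenizer, weights).
snapshot_download(
    repo_id="TheBloke/Llama-2-70b-fp16",
    local_dir=str(local_model_path),
    allow_patterns=["*.json", "*.txt", "*.model", "*.safetensors", "*.bin"],
)

# Upload the artifacts to your own S3 location (placeholder bucket/prefix).
pretrained_model_location = S3Uploader.upload(
    local_path=str(local_model_path),
    desired_s3_uri="s3://<your-bucket>/llama2-70b-fp16",
)
print(f"Model uploaded to {pretrained_model_location}")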

Define a variable to contain the S3 URL of the location that holds the model

[ ]
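
For example (the URI below is a placeholder; point it at the prefix that actually holds the weights, or reuse the value returned by the upload step above):

# Placeholder S3 URL of the model artifacts; replace with your own location.
pretrained_model_location = "s3://<your-bucket>/llama2-70b-fp16/"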

Create a SageMaker-compatible model artifact, upload it to S3, and bring your own inference script.

SageMaker Large Model Inference containers can be used to host models without providing your own inference code. This is extremely useful when there is no custom pre-processing of the input data or postprocessing of the model's predictions.

SageMaker needs the model artifacts to be in a tarball format. In this example, we provide only the following file: serving.properties.

The tarball is in the following format:

code
└── serving.properties
serving.properties is the configuration file that can be used to configure the model server.

Create serving.properties

This is a configuration file to indicate to DJL Serving which model parallelization and inference optimization libraries you would like to use. Depending on your need, you can set the appropriate configuration.

Here is a list of settings that we use in this configuration file:

engine: The engine for DJL to use. In this case, we have set it to MPI.
option.model_id: The model ID of a pretrained model hosted inside a model repository on huggingface.co (https://huggingface.co/models), or the S3 path to the model artifacts.
option.tensor_parallel_degree: Set to the number of GPU devices across which the model should be partitioned. This parameter also controls the number of workers per model that are started when DJL Serving runs. For example, on a 4-GPU machine with 4 partitions, we will have 1 worker per model to serve requests.

For more details on the configuration options and an exhaustive list, you can refer to the documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-large-model-configuration.html.

[ ]
[ ]
[ ]
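
As an illustrative sketch, the cell below writes a serving.properties with the settings discussed above; the tensor parallel degree of 8 matches the 8 GPUs on ml.g5.48xlarge, and the rolling batch options are assumptions to tune for your workload:

import os

# Illustrative serving.properties for the LMI container; values are assumptions to tune.
serving_properties = f"""\
engine=MPI
option.model_id={pretrained_model_location}
option.tensor_parallel_degree=8
option.rolling_batch=auto
option.max_rolling_batch_size=32
option.dtype=fp16
"""

os.makedirs("code", exist_ok=True)  # directory that will be packaged into the tarball
with open("code/serving.properties", "w") as f:
    f.write(serving_properties)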

The image URI for the DJL LMI container is retrieved here

[ ]
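
One way to obtain it, as a sketch: the framework name below is the identifier the SageMaker Python SDK uses for the DJL DeepSpeed/LMI images, and the version shown is an assumption, so check the available releases for your region.

import sagemaker

# Retrieve the DJL LMI (DeepSpeed) container image URI; the version is an assumption.
inference_image_uri = sagemaker.image_uris.retrieve(
    framework="djl-deepspeed",
    region=sagemaker.Session().boto_session.region_name,
    version="0.23.0",
)
print(inference_image_uri)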

Create the tarball and then upload it to an S3 location

[ ]
[ ]
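
A sketch of this packaging step, assuming serving.properties sits in the local code/ directory created above and the target bucket/prefix is a placeholder:

import tarfile
from sagemaker.s3 import S3Uploader

# Package the code directory (containing serving.properties) into a tarball.
with tarfile.open("mymodel.tar.gz", "w:gz") as tar:
    tar.add("code", arcname="code")

# Upload the tarball to S3; the bucket/prefix is a placeholder.
s3_code_artifact = S3Uploader.upload(
    local_path="mymodel.tar.gz",
    desired_s3_uri="s3://<your-bucket>/llama2-70b-lmi/code",
)
print(f"Code artifact uploaded to {s3_code_artifact}")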

To create the endpoint, the steps are:

  1. Create the model using the image URI and the model tarball uploaded earlier

  2. Create the endpoint config using the following key parameters

    a) Instance type is ml.g5.48xlarge

    b) ContainerStartupHealthCheckTimeoutInSeconds is 3600 to ensure the health check waits until the model is ready

  3. Create the endpoint using the endpoint config created above

Create the Model

Use the image URI for the DJL container and the S3 location to which the tarball was uploaded.

The container downloads the model into the /tmp space on the instance because SageMaker maps /tmp to the Amazon Elastic Block Store (Amazon EBS) volume that is mounted when we specify the endpoint creation parameter VolumeSizeInGB. It leverages s5cmd (https://github.com/peak/s5cmd), which offers very fast download speeds and is hence extremely useful when downloading large models.

For instances like p4dn, which come with local instance storage, we can continue to leverage /tmp on the container. The size of this mount is large enough to hold the model.

[ ]
[ ]
[ ]
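
A sketch of the three steps with boto3; the model, config, and endpoint names are illustrative, and inference_image_uri / s3_code_artifact are reused from the sketches above:

import boto3
import sagemaker

sm_client = boto3.client("sagemaker")
role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

model_name = "llama2-70b-lmi-model"  # illustrative names
endpoint_config_name = "llama2-70b-lmi-config"
endpoint_name = "llama2-70b-lmi-endpoint"

# 1. Create the model from the LMI image and the uploaded tarball.
sm_client.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": inference_image_uri,
        "ModelDataUrl": s3_code_artifact,
    },
)

# 2. Create the endpoint config on ml.g5.48xlarge with a long startup health-check timeout.
sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.g5.48xlarge",
            "InitialInstanceCount": 1,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
        }
    ],
)

# 3. Create the endpoint from the endpoint config.
sm_client.create_endpoint(
    EndpointName=endpoint_name,
    EndpointConfigName=endpoint_config_name,
)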

This step can take ~20 minutes or longer, so please be patient

[ ]
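
To wait for the endpoint programmatically, a boto3 waiter can be used, for example:

# Block until the endpoint is in service (can take ~20 minutes or longer).
waiter = sm_client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)
print(sm_client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"])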

While you wait for the endpoint to be created, you can read more about:

Leverage Boto3 to invoke the endpoint.

This is a generative model, so we pass in text as a prompt and the model will complete the sentence and return the results.

You can pass a prompt as input to the model. This is done by setting inputs to the prompt text. The model then returns a result for each prompt. The text generation can be configured using appropriate parameters, which need to be passed to the endpoint as a dictionary of kwargs. Refer to this documentation for more details: https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.

The code sample below illustrates the invocation of the endpoint using a text prompt and also sets some generation parameters

[ ]
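
A sketch of the invocation using the sagemaker-runtime client; the prompt text and generation parameters below are placeholders, and endpoint_name is reused from the sketch above:

import json
import boto3

smr_client = boto3.client("sagemaker-runtime")

# Illustrative prompt and generation parameters (kwargs for text generation).
payload = {
    "inputs": "Amazon SageMaker is a service that",
    "parameters": {
        "max_new_tokens": 128,
        "temperature": 0.7,
        "do_sample": True,
    },
}

response = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=json.dumps(payload),
    ContentType="application/json",
)
print(response["Body"].read().decode("utf-8"))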

Clean Up

[ ]
[ ]
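
A sketch of the cleanup calls, reusing the client and names defined in the sketches above:

# Delete the endpoint, endpoint configuration, and model to stop incurring charges.
sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name)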