TensorFlow BYOM Iris
TensorFlow BYOM: Train locally and deploy on SageMaker.
This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
This notebook was last tested on an ml.m5.xlarge instance running the Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized) kernel in SageMaker Studio.
Introduction
We will do a classification task, training locally on the instance where this notebook runs, and then set up a real-time hosted endpoint in SageMaker.
Consider the following model definition for iris classification. This model uses tensorflow.estimator.DNNClassifier, a pre-built estimator, for its model definition.
Prerequisites and Preprocessing
Permissions and environment variables
Here we set up the linkage and authentication to AWS services. In this notebook we only need a role that grants training and hosting access to your data; the SageMaker SDK will use the default S3 bucket when needed. If get_execution_role does not return a role with the appropriate permissions, you'll need to specify an IAM role ARN that does.
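As a minimal sketch, the setup typically looks like this (the role lookup assumes you are running inside SageMaker Studio or a notebook instance):

```python
import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()

# Assumes this notebook runs inside SageMaker; otherwise paste an IAM role ARN
# with SageMaker training and hosting permissions.
role = get_execution_role()

# Default S3 bucket the SageMaker SDK uses when no bucket is specified.
bucket = sagemaker_session.default_bucket()
```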
Model Definitions
For this example, we'll use a very simple network architecture, with three densely-connected layers.
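As a rough sketch, and assuming TF 2.x Keras (which matches the fit call used later in this notebook), a network with three densely connected layers for the three iris classes could look like the following; the layer widths here are illustrative assumptions, not the notebook's exact values:

```python
import tensorflow as tf

# Three densely connected layers; widths (10, 20, 3) are illustrative.
model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),  # 4 iris features
        tf.keras.layers.Dense(20, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),  # 3 iris classes
    ]
)
```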
Data Setup
We'll use the pre-processed iris training and test data stored in a public S3 bucket for this example.
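A sketch of the download step, with placeholder bucket and key names (substitute the public bucket and paths the notebook actually uses), and assuming a CSV layout with the label in the first column:

```python
import boto3
import numpy as np

s3 = boto3.client("s3")

# Placeholder bucket/keys: replace with the public bucket used by the notebook.
s3.download_file("example-public-bucket", "iris/iris_train.csv", "iris_train.csv")
s3.download_file("example-public-bucket", "iris/iris_test.csv", "iris_test.csv")

# Assumed layout: label in column 0, four features in columns 1-4.
train = np.loadtxt("iris_train.csv", delimiter=",")
x_train, y_train = train[:, 1:], train[:, 0].astype(int)
```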
Training the Network Locally
Here, we train the network using the TensorFlow fit method, just as we would on a local machine. This should only take a few seconds because the model is so simple.
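A minimal sketch of the local training step, continuing from the model and data above; the optimizer, epoch count, and batch size are illustrative assumptions:

```python
# Compile and train locally; hyperparameters are illustrative.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=50, batch_size=32)
```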
Set up hosting for the model
Export the model from tensorflow
In order to set up hosting, we have to import the model from training into hosting. We begin by exporting the model from TensorFlow and saving it to our file system. We also need to convert the model into a form that is readable by sagemaker.tensorflow.model.TensorFlowModel. The difference between a SageMaker model and a TensorFlow model is small, and the conversion is simple: move the TensorFlow exported model into a directory export/Servo/ and tar the entire directory. SageMaker will recognize this as a loadable TensorFlow model.
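A sketch of the export-and-tar step, assuming the Keras model sketched above; SavedModel version directories are numbered, so the model is saved under export/Servo/1:

```python
import tarfile

# Save the model in TensorFlow SavedModel format under export/Servo/<version>/.
model.save("export/Servo/1")

# Tar the entire export/ directory so SageMaker can load it as a TensorFlow model.
with tarfile.open("model.tar.gz", mode="w:gz") as archive:
    archive.add("export", recursive=True)
```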
Open a new SageMaker session and upload the model to the default S3 bucket with the sagemaker.Session.upload_data method. We need the local path of the model exported from TensorFlow and the prefix in the default bucket where we want to store it (/model). The default S3 bucket can be found using the sagemaker.Session.default_bucket method.
Here, we upload the model to S3
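A one-line sketch using the session created earlier; key_prefix="model" places the archive under the /model prefix of the default bucket:

```python
# Upload model.tar.gz to s3://<default-bucket>/model/ and keep the returned S3 URI.
model_data = sagemaker_session.upload_data(path="model.tar.gz", key_prefix="model")
```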
Import model into SageMaker
Use sagemaker.tensorflow.model.TensorFlowModel to import the model into SageMaker so that it can be deployed. We need the S3 location of the model and the role for authentication.
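A sketch, assuming SageMaker Python SDK v2; framework_version should match the TensorFlow version used for training:

```python
from sagemaker.tensorflow.model import TensorFlowModel

sagemaker_model = TensorFlowModel(
    model_data=model_data,    # S3 URI returned by upload_data above
    role=role,                # IAM role for authentication
    framework_version="2.3",  # match the training TensorFlow version
)
```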
Create endpoint
Now the model is ready to be deployed at a SageMaker endpoint, using the sagemaker.tensorflow.model.TensorFlowModel.deploy method. Unless you prefer a different instance type, we recommend a single ml.m5.2xlarge instance for this example; the instance count and type are supplied as arguments.
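A sketch of the deployment call with the instance count and type mentioned above:

```python
# Deploy to a real-time endpoint on a single ml.m5.2xlarge instance.
predictor = sagemaker_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.2xlarge",
)
```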
Validate the endpoint for use
We can now use this endpoint to classify an example to ensure that it works. The output from predict will be an array of probabilities for each of the 3 classes.
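A sketch of a single prediction; the feature values are illustrative iris measurements:

```python
# Four features for one example; values are illustrative.
result = predictor.predict([[6.4, 3.2, 4.5, 1.5]])
print(result)  # probabilities for each of the 3 classes
```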
Delete all temporary directories so that we do not affect the next run. Also, optionally delete the endpoint.
If you do not want to continue using the endpoint, you can delete it. Remember that you are charged for endpoints while they are running, so if this is a simple test or practice run, it is recommended to delete them.
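A sketch of the cleanup, removing the local artifacts created above and then the endpoint:

```python
import os
import shutil

# Remove the local export directory and model archive from this run.
shutil.rmtree("export", ignore_errors=True)
if os.path.exists("model.tar.gz"):
    os.remove("model.tar.gz")

# Optionally delete the endpoint to stop incurring charges.
predictor.delete_endpoint()
```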
Notebook CI Test Results
This notebook was tested in multiple regions. The result for us-west-2 is shown at the top of the notebook; the results for the other regions are as follows.