Using Environments
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License
Contents
- Introduction
- Setup
- Use curated environments
- Create your own environment
- Add Python packages
- Specify environment variables
- Submit run using environment
- Register environment
- List and get existing environments
- Other ways to create environments
- From existing Conda environment
- From Conda or pip files
- Using environments for inferencing
- Docker settings
- Spark and Azure Databricks settings
- Next steps
Introduction
Azure ML environments are an encapsulation of the environment where your machine learning training happens. They define Python packages, environment variables, Docker settings, and other attributes in a declarative fashion. Environments are versioned: you can update them and retrieve old versions to revisit and review your work.
Environments allow you to:
- Encapsulate dependencies of your training process, such as Python packages and their versions.
- Reproduce the Python environment of your local computer in a remote run on a VM or Machine Learning Compute cluster.
- Reproduce your experimentation environment in a production setting.
- Revisit and audit the environment in which an existing model was trained.
The environment, compute target, and training script together form the run configuration: the full specification of a training run.
Setup
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration notebook first if you haven't.
First, let's validate the Azure ML SDK version and connect to the workspace.
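For example (a minimal sketch; Workspace.from_config assumes a config.json produced by the configuration notebook):

```python
import azureml.core
from azureml.core import Workspace

# Print the installed SDK version
print("Azure ML SDK version:", azureml.core.VERSION)

# Connect to the workspace described by config.json
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, sep="\n")
```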
Use curated environments
Curated environments are provided by Azure Machine Learning and are available in your workspace by default. They contain collections of Python packages and settings to help you get started with different machine learning frameworks.
- The AzureML-Minimal environment contains a minimal set of packages to enable run tracking and asset uploading. You can use it as a starting point for your own environment.
- The AzureML-Tutorial environment contains common data science packages, such as Scikit-Learn, Pandas and Matplotlib, and a larger set of azureml-sdk packages.
Curated environments are backed by cached Docker images, reducing the run preparation cost.
You can get a curated environment by name using Environment.get. To list all curated environments, use Environment.list.
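A sketch of both calls (assuming ws is the workspace object from the setup step):

```python
from azureml.core import Environment

# Retrieve a curated environment by name
curated_env = Environment.get(workspace=ws, name="AzureML-Minimal")

# Environment.list returns a dictionary of name -> Environment;
# curated environments carry the reserved "AzureML" prefix
envs = Environment.list(workspace=ws)
for name in envs:
    if name.startswith("AzureML"):
        print(name)
```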
Note: The name prefixes AzureML and Microsoft are reserved for curated environments. Do not use them for your own environments.
Create your own environment
You can create an environment by instantiating an Environment object and then setting its attributes: the set of Python packages, environment variables, and others.
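For example (a minimal sketch; the names and the environment variable shown are illustrative):

```python
from azureml.core import Environment

# Instantiate a new, empty environment
myenv = Environment(name="myenv")

# Environment variables are set through the environment_variables dictionary
myenv.environment_variables["EXAMPLE_ENV_VAR"] = "example_value"
```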
Add Python packages
The recommended way is to specify Conda packages, as they typically come with a complete set of pre-built binaries.
You can also add pip packages, and pin a specific version of a package.
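A sketch using CondaDependencies (the package names here are illustrative):

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

myenv = Environment(name="myenv")
conda_dep = CondaDependencies()

# Conda packages typically ship complete pre-built binaries
conda_dep.add_conda_package("scikit-learn")

# pip packages can be pinned to a specific version
conda_dep.add_pip_package("pillow==6.2.1")

# Attach the dependency specification to the environment
myenv.python.conda_dependencies = conda_dep
```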
Submit run using environment
When you submit a run, you can specify which environment to use.
On the first run in a given environment, Azure ML spends some time building the environment. On subsequent runs, Azure ML keeps track of changes and reuses the existing environment, resulting in faster run completion.
To submit a run, create a run configuration that combines the script file and the environment, and pass it to Experiment.submit. In this example, the script is submitted to the local computer, but you can specify other compute targets, such as remote clusters, as well.
To audit the environment used for a run, use Run.get_environment.
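A sketch of the submission (assumes a train.py in the current directory, the workspace object ws, and the myenv environment defined above; the experiment name is illustrative):

```python
from azureml.core import Experiment, ScriptRunConfig

# Combine the script and the environment into a run configuration
src = ScriptRunConfig(source_directory=".", script="train.py", environment=myenv)

# Submit to the local computer; pass a compute_target for remote execution
run = Experiment(workspace=ws, name="environment-example").submit(src)
run.wait_for_completion(show_output=True)

# Audit the environment the run actually used
print(run.get_environment())
```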
Register environment
You can manage environments by registering them. This allows you to track their versions and reuse them in future runs. For example, once you've constructed an environment that meets your requirements, you can register it and use it in other experiments to standardize your workflow.
If you register an environment with the same name as an existing one, the version number is increased by one. Note that Azure ML keeps track of differences between versions, so if you re-register an identical environment, the version number is not increased.
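Registration itself is a single call (assuming ws and myenv from the earlier steps):

```python
# Register the environment; registering the same name again bumps the version,
# unless the definition is identical to the registered one
myenv.register(workspace=ws)
```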
List and get existing environments
Your workspace contains a dictionary of registered environments. You can use Environment.get to retrieve a specific environment, optionally at a specific version.
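For example (a sketch, assuming ws is the workspace object; the environment name and version are illustrative):

```python
from azureml.core import Environment

# The workspace exposes registered environments as a name -> Environment dictionary
for name, env in ws.environments.items():
    print(name, env.version)

# Retrieve a specific version of a registered environment
restored = Environment.get(workspace=ws, name="myenv", version="1")
```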
Other ways to create environments
From existing Conda environment
You can create an environment from an existing Conda environment. This makes it easy to reuse your local interactive environment in Azure ML remote runs. For example, if you've created a Conda environment using
conda create -n mycondaenv
you can create an Azure ML environment from that Conda environment using
myenv = Environment.from_existing_conda_environment(name="myenv", conda_environment_name="mycondaenv")
From conda or pip files
You can create environments from a Conda specification file or a pip requirements file using
myenv = Environment.from_conda_specification(name="myenv", file_path="path-to-conda-specification-file")
myenv = Environment.from_pip_requirements(name="myenv", file_path="path-to-pip-requirements-file")
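For reference, a Conda specification file (the file_path argument above) might look like the following sketch; the package names are illustrative:

```yaml
name: myenv
channels:
  - defaults
dependencies:
  - python=3.8
  - scikit-learn
  - pip:
    - azureml-defaults
```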
Using environments for inferencing
You can reuse the training environment when you deploy your model as a web service by specifying the inferencing stack version and then adding the environment to InferenceConfig.
from azureml.core.model import InferenceConfig
myenv.inferencing_stack_version = "latest"
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
See the Register Model and Deploy as Webservice notebook for an end-to-end example of web service deployment.
Docker settings
A Docker container provides an efficient way to encapsulate dependencies. When you enable Docker, Azure ML builds a Docker image and creates a Python environment within that container from your specifications. Docker images are reused: the first run in a new environment typically takes longer as the image is built.
Note: For runs on a local computer or an attached virtual machine, that computer must have Docker installed and enabled. Machine Learning Compute has Docker pre-installed.
The docker.enabled attribute controls whether to use a Docker container or the host OS for execution.
You can specify a custom Docker base image and registry. This allows you to customize and control in detail the guest OS in which your training run executes. You can also specify whether to use the GPU, whether to use shared volumes, and the shm size.
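A sketch of these Docker settings on the environment created earlier (the base image address is illustrative):

```python
# Use a Docker container rather than the host OS
myenv.docker.enabled = True

# Optional: a custom base image (and, if needed, myenv.docker.base_image_registry)
myenv.docker.base_image = "mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04"

# Optional: shared volumes and shared memory size for the container
myenv.docker.shared_volumes = True
myenv.docker.shm_size = "2g"
```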
Spark and Azure Databricks settings
In addition to Python and Docker settings, Environment also contains attributes for Spark and Azure Databricks runs. These attributes become relevant when you submit runs to those compute targets.
Next steps
Train with ML frameworks on Azure ML:
Learn more about registering and deploying a model: