Edit for more complex setups (like your own package):
Create your own Dockerfile, e.g.:
from azureml.core import Environment

# Dockerfile for the custom image; keep your project-specific installs here
dockerfile = r"""
FROM nvcr.io/nvidia/pytorch:22.02-py3
RUN python3 -m pip install --upgrade pip setuptools wheel
RUN pip install numpy .....
COPY requirements.azure.txt .
RUN pip install -r requirements.azure.txt
....
"""

# Build the environment from the Dockerfile rather than a prebuilt base image
myEnv = Environment(name="your-env-name")
myEnv.docker.base_image = None
myEnv.docker.base_dockerfile = dockerfile
myEnv.python.user_managed_dependencies = True  # dependencies are handled inside the image
myEnv.docker.arguments = ['--privileged']
myEnv.register(workspace=ws)
See below for how to use this myEnv.
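If you want to reuse the registered environment in a later script, you can fetch it from the workspace again. This is just a minimal sketch, assuming ws is your Workspace object as above:

from azureml.core import Environment

# Fetch the previously registered environment and reuse it, e.g. in a ScriptRunConfig
myEnv = Environment.get(workspace=ws, name="your-env-name")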
I assume you are familiar with ScriptRunConfig and similar classes (otherwise see my comment).
The way I typically set up my environment is:
from azureml.core import Environment, ScriptRunConfig
from azureml.core.runconfig import RunConfiguration

# Build the environment from a pip requirements file
myenv = Environment.from_pip_requirements(
    name="your-env-name",
    file_path="requirements.azure.txt",
)

# Specify a (GPU) base image
myenv.docker.enabled = True
myenv.docker.base_image = (
    "mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04"
)

# Either go with RunConfiguration (more general)
train_run_config = RunConfiguration()
train_run_config.environment = myenv
....

# OR use the simpler ScriptRunConfig:
# (launch_cmd and compute are your own command and compute target, defined elsewhere)
run_config = ScriptRunConfig(
    source_directory=".",
    command=launch_cmd,
    compute_target=compute,
    environment=myenv,
)
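For completeness, this is how the ScriptRunConfig is then submitted; a minimal sketch where the experiment name is just a placeholder and ws is the workspace object from above:

from azureml.core import Experiment

# Submit the configured run to an experiment and stream the logs
experiment = Experiment(workspace=ws, name="my-experiment")
run = experiment.submit(run_config)
run.wait_for_completion(show_output=True)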
The Docker image serves as a base; you can build your own or pick one of Azure's defaults.
The crucial part is the from_pip_requirements call.
I typically store my Azure requirements in a separate requirements.azure.txt,
since my local install e.g. might not have a GPU, etc.
Here you can also use pip to install prepackaged wheels of your local package; see Install local wheel file with requirements.txt.
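For illustration, such a requirements.azure.txt could look like the following; the package names, versions, and the wheel path are made-up examples, not part of any real setup:

# requirements.azure.txt (illustrative content only)
torch==1.12.1
numpy==1.23.5
# a wheel built from your local package, path relative to source_directory
./dist/mypackage-0.1.0-py3-none-any.whl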
I really hope this makes it clear now :) Otherwise, feel free to leave a comment.