
I am having an issue deploying a model using a custom Docker image. The deployment fails because the required packages are not in the "default" environment, so I need to specify a custom one ("/miniconda/envs/py37/bin/python").

I am using the same image to run training, and with the Estimator class I do have a way to specify the environment. Is it possible to do something similar with the InferenceConfig?

    est = Estimator(source_directory=script_folder,
                    script_params=script_params,
                    inputs=inputs,
                    compute_target=gpu_compute_target,
                    entry_script='train.py',
                    image_registry_details = my_registry,
                    custom_docker_image='omr:latest',
                    use_gpu=True,
                    user_managed=True)
    est.run_config.environment.python.user_managed_dependencies = True
    est.run_config.environment.python.interpreter_path = "/miniconda/envs/py37/bin/python"



    #establish container configuration using custom base image
    privateRegistry = aml_utils.getContainerRegistryDetails()
    inference_config = InferenceConfig(source_directory="./detect",
                                       runtime= "python", 
                                       entry_script="aml_score.py",
                                       enable_gpu=False,
                                       base_image="amlworkspaceom3760145996.azurecr.io/omr:latest",
                                       base_image_registry = privateRegistry)

    #deploy model via AKS deployment target
    deployment_config = AksWebservice.deploy_configuration(gpu_cores = 1, memory_gb = 1, auth_enabled = False)
    targetcluster = aml_utils.getGpuDeploymentTarget(ws)

    model_service = Model.deploy(workspace = ws,
                                 name = "model_name",
                                 models = [model],
                                 inference_config = inference_config,
                                 deployment_config = deployment_config,
                                 deployment_target = targetcluster)

1 Answer


We haven't updated our documentation yet, but you can use an environment in your InferenceConfig :)

SDK documentation: https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py#definition

environment: Environment

An environment object to use for the deployment. Doesn't have to be registered. A user should provide either this, or the other parameters, not both. The individual parameters will NOT serve as an override for the environment object. Exceptions include entry_script, source_directory and description.

The environment parameter is used in lieu of runtime/base_image/base_image_registry.

  • It looks like if I add an Environment definition in the InferenceConfig I cannot use a base image of my choice. It's one or the other. I would like to define a base image and specify the environment to use in case the image has multiple environments. – user9427997 Aug 09 '19 at 17:27
  • You can specify base image by changing the attributes in DockerSection of the Environment object. For example: env.docker.base_image = my-base-image – Roope Astala - MSFT Aug 09 '19 at 17:55