
While deploying a machine learning model using the AZ CLI, the command

az ml model deploy --name $(AKS_DEPLOYMENT_NAME) \
--model '$(MODEL_NAME):$(get_model.MODEL_VERSION)' \
--compute-target $(AKS_COMPUTE_NAME) \
--ic inference_config.yml \
--dc deployment_config_aks.yml \
-g $(RESOURCE_GROUP) --workspace-name $(WORKSPACE_NAME) \
--overwrite -v

will use the inference_config.yml and deployment_config_aks.yml files to deploy the model.
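For reference, a minimal deployment_config_aks.yml for the v1 CLI typically looks like the following. The field names come from the v1 deployment-config schema, but exact keys can vary by CLI version, so treat this as an illustrative sketch rather than a definitive template:

```yaml
# Illustrative AKS deployment config (v1 schema); values are examples only
computeType: AKS
containerResourceRequirements:
  cpu: 1
  memoryInGB: 2
authEnabled: true
autoScaler:
  autoscaleEnabled: true
  minReplicas: 1
  maxReplicas: 3
```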

However, if we are using the azureml-sdk in Python, the commands are:

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies 

conda_deps = CondaDependencies.create(conda_packages=['numpy','scikit-learn==0.19.1','scipy'], #for-example
pip_packages=['azureml-defaults', 'inference-schema']) #for-example
myenv = Environment(name='myenv') 
myenv.python.conda_dependencies = conda_deps

from azureml.core.model import InferenceConfig

inf_config = InferenceConfig(entry_script='score.py', environment=myenv)


from azureml.core.webservice import AksWebservice

aks_config = AksWebservice.deploy_configuration()


# assumes ws (Workspace), model (registered Model) and aks_target (AksCompute) already exist
from azureml.core.model import Model

aks_service_name = 'some-name'

aks_service = Model.deploy(workspace=ws,
                           name=aks_service_name,
                           models=[model],
                           inference_config=inf_config,
                           deployment_config=aks_config,
                           deployment_target=aks_target)

How exactly can we use a conda dependencies file (conda_dependencies.yml), an inference config file (inference_config.yml), and a deployment config file (deployment_config_aks.yml) to create the inf_config and aks_config objects in Python? Is there a .from_file() option to load the YAML definitions? My use case is creating Python steps in Azure Pipelines as an MLOps workflow.

Bright Ran
Anirban Saha

3 Answers


Those files can be downloaded from Azure ML and passed into the Azure ML SDK in Python.

So using this code to deploy:

from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
from azureml.core.environment import Environment

inference_config = InferenceConfig(entry_script=script_file_name, environment=myenv)

aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1, 
                                               memory_gb = 1, 
                                               description = 'Iris classification service')

aci_service_name = 'automl-sample-bankmarketing-all'
aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)
aci_service.wait_for_deployment(True)

The scoring script and environment file can be downloaded from the AutoML run:

from azureml.core.environment import Environment
from azureml.automl.core.shared import constants
best_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml')
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")

script_file_name = 'inference/score.py'
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'inference/score.py')

I explain more in this video, and the full notebook is here.

Jon
    1. Your answer addresses part of my question: `myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")` will help me create the environment and, from it, the InferenceConfig. 2. The score.py can only be downloaded for AutoML runs, which create these scoring files too; for a manual experiment we have to write it ourselves. Maybe Model.get_outputs() (or something close to this) will help us get the score.py. Thanks for this, it helps me move ahead! – Anirban Saha Mar 02 '21 at 12:34

UPDATE: "I get what you're asking for now. You want a method on the InferenceConfig class where you can pass a .yml, just as you can with a CondaEnvironment class, correct? This isn't supported, but I agree it should be a feature, as it would make adoption of AML SDK v2 easier for users. A workaround might be to read the YAML into a Python dictionary and plug those params into the class creation call..."
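Until that lands, the workaround above can be sketched as follows. This is an assumption-laden sketch: `yaml_to_aks_kwargs` is a hypothetical helper, and the CLI YAML schema (`computeType`, `containerResourceRequirements`, ...) does not map one-to-one onto the `AksWebservice.deploy_configuration()` parameter names, so only a few common fields are translated here:

```python
import yaml  # PyYAML

# Hypothetical helper: read a CLI-style deployment_config_aks.yml and map a
# few of its fields onto keyword arguments for
# AksWebservice.deploy_configuration(). The two schemas differ, so this
# mapping is illustrative, not exhaustive.
def yaml_to_aks_kwargs(path):
    with open(path) as f:
        cfg = yaml.safe_load(f)
    resources = cfg.get("containerResourceRequirements", {})
    kwargs = {}
    if "cpu" in resources:
        kwargs["cpu_cores"] = resources["cpu"]
    if "memoryInGB" in resources:
        kwargs["memory_gb"] = resources["memoryInGB"]
    if "authEnabled" in cfg:
        kwargs["auth_enabled"] = cfg["authEnabled"]
    return kwargs

# Usage (assuming the azureml-sdk is installed):
# from azureml.core.webservice import AksWebservice
# aks_config = AksWebservice.deploy_configuration(**yaml_to_aks_kwargs("deployment_config_aks.yml"))
```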

Configuring model deployment configuration with YAML is not currently supported in the Python SDK but is something that should be entering the YAML-based private preview coming soon. Here's an almost working version of what you're looking for that will be available for public preview shortly. Follow the corresponding GitHub repo for more info.

az ml endpoint create --file batchendpoint.yml

batchendpoint.yml

name: myBatchEndpoint
type: batch
auth_mode: AMLToken
deployments:
  blue:
    model: azureml:models/sklearn_regression_model:1
    code_configuration:
      code:
        directory: ./endpoint
      scoring_script: ./test.py
    environment: azureml:AzureML-Minimal/versions/1
    scale_settings:
      node_count: 1
    batch_settings:
      partitioning_scheme:
        mini_batch_size: 5
      output_configuration:
        output_action: AppendRow
        append_row_file_name: append_row.txt
      retry_settings:
        maximum_retries: 3
        timeout_in_seconds: 30
      error_threshold: 10
      logging_level: info
    compute:
      target: azureml:cpu-cluster
Anders Swanson
    As far as I understand, this is using the CLI to deploy using a YAML. I am interested in using the Python SDK to create the inference configuration and deployment configuration from the corresponding YAML files. However, this seems to be an interesting and straightforward way of getting it done using the CLI. – Anirban Saha Mar 03 '21 at 05:05
  • @AnirbanSaha I added more info. You should def create a UserVoice item for your request, i'd certainly upvote it! https://feedback.azure.com/forums/257792-machine-learning – Anders Swanson Mar 11 '21 at 07:44

You can refer to the following tutorials to try setting up a Python script that creates and deploys the Azure Machine Learning model.

After completing the Python script, you can try executing the script in your YAML pipeline on Azure DevOps.
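For example, such a script can be run as a step in an Azure Pipelines YAML. The names below (deploy_model.py, the Python version) are placeholders for illustration; `UsePythonVersion@0` is the standard task for selecting a Python version on the agent:

```yaml
# Illustrative Azure Pipelines step running an Azure ML deployment script
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.8'
  - script: |
      pip install azureml-sdk
      python deploy_model.py  # hypothetical script containing the SDK code
    displayName: 'Deploy model to AKS'
```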

Bright Ran-MSFT
  • After completing the Python script, you can try executing the script in your YAML pipeline on Azure DevOps. It doesn't address the question. I want to use `inference_config` YAML files and `deployment_config` YAML files as we do in AZ CLI. – Anirban Saha Mar 03 '21 at 17:52
  • Similarly to **@Jon**'s suggestion, how about directly calling the `inference_config.yml` and `deployment_config_aks.yml` in the Python scripts? Maybe you can try it to see if it works. – Bright Ran-MSFT Mar 04 '21 at 06:40
  • Hi @AnirbanSaha, how are things going? Is **@Jon**'s suggestion helpful to you? – Bright Ran-MSFT Mar 11 '21 at 08:34
  • Hi @Bright Ran-MSFT, calling the files directly does not work, as the SDK expects the arguments to be instances of the particular classes. I am using the CLI because using these files with Python directly is not possible. The only possible way is to create an environment using conda_deps, and then inf_config and dep_config need to be defined separately. – Anirban Saha Mar 11 '21 at 17:49
  • Hi @AnirbanSaha, unfortunately, we can't seem to find any method that does what you need. – Bright Ran-MSFT Mar 17 '21 at 07:25