
I'm trying to deploy an MLflow model locally using the Azure SDK for Python. I'm following this example https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/mlflow/online-endpoints-deploy-mlflow-model.ipynb and this one https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb.

My dir structure looks like this:

 - keen_test
     +- model
     |    +- artifacts
     |    |    +- _model_impl_0s5d99i3.pt
     |    |    '- settings.json
     |    +- conda.yaml
     |    +- MLmodel
     |    +- python_env.yaml
     |    +- python_model.pkl
     |    '- requirements.txt
     '- deploy-keen.ipynb

MLmodel file:

artifact_path: model
flavors:
  python_function:
    artifacts:
      model:
        path: artifacts/_model_impl_0s5d99i3.pt
        # uri: /mnt/azureml/cr/j/1393df3add7949989e16b359b8b4fd0c/exe/wd/_model_impl_0s5d99i3.pt
      settings:
        path: artifacts/settings.json
        # uri: /mnt/azureml/cr/j/1393df3add7949989e16b359b8b4fd0c/exe/wd/tmpdy7crhkb/settings.json
    cloudpickle_version: 2.2.1
    env:
      conda: conda.yaml
      virtualenv: python_env.yaml
    loader_module: mlflow.pyfunc.model
    python_model: python_model.pkl
    python_version: 3.8.10
mlflow_version: 2.2.2
model_uuid: 8fba816341fe4ddabac63e552e62874a
run_id: keen_drain_w43g3fq4t6_HD_1
signature:
  inputs: '[{"name": "image", "type": "string"}]'
  outputs: '[{"name": "filename", "type": "string"}, {"name": "boxes", "type": "string"}]'
utc_time_created: '2023-05-25 22:11:54.553781'
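Based on the signature above (a single string input column named "image"), a request file for testing the endpoint would typically look like the sketch below. This assumes the standard "input_data" payload format that the Azure ML scoring server for MLflow models expects; the placeholder value is hypothetical:

{
  "input_data": {
    "columns": ["image"],
    "index": [0],
    "data": [["<base64-encoded image string>"]]
  }
}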

For deployment I use the following commands:

# create a blue deployment
from azure.ai.ml.entities import ManagedOnlineDeployment, Model

model = Model(
    path="keen_test/model",
    type="mlflow_model",
    description="my sample mlflow model",
)

blue_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=online_endpoint_name,
    model=model,
    instance_type="Standard_F4s_v2",
    instance_count=1,
)

When I try to run this:

ml_client.online_deployments.begin_create_or_update(blue_deployment, local=True)

I get the error:

RequiredLocalArtifactsNotFoundError: ("Local endpoints only support local artifacts. '%s' did not contain required local artifact '%s' of type '%s'.", 'Local deployment (endpoint-06221317698387 / blue)', 'environment.image or environment.build.path', "")

I tried modifying the artifact_path in the MLmodel configuration, but nothing worked. What should I change in my configuration to make local deployment work? Do you have any ideas and/or experience with local deployment of MLflow models with the Azure Python SDK?

Jakub Małecki
1 Answer


I tried your code in my environment with my data, and I got the same error as you.


That is because local deployment requires you to pass an environment and a scoring script, so that the Docker image can be built from that environment and provisioning can take place.

Use the code below, which provides both an environment and a scoring script.

from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    Model,
)

endpoint_name = "endpoint-local-reg"

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=endpoint_name,
    model=Model(path="../model-1/model/sklearn_regression_model.pkl"),
    code_configuration=CodeConfiguration(
        code="../model-1/onlinescoring", scoring_script="score.py"
    ),
    environment=Environment(
        conda_file="../model-1/environment/conda.yaml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)

deployment = ml_client.online_deployments.begin_create_or_update(
    deployment, local=True
)


This builds a Docker image and deploys it.


You can see in Docker that the container is created, and that it is listening on three different ports.

endpoint = ml_client.online_endpoints.get(name=endpoint_name, local=True)
print(endpoint)


JayashankarGS
  • Thanks for your answer, but this is not exactly what I was asking for. Please notice my model is not a sklearn pickle; my model, with all its configuration, is defined in MLmodel. And according to the documentation I found, MLflow takes care of installing all dependencies and configuring the environment. It's all orchestrated by the MLmodel file. But I'll give your answer a try anyway. – Jakub Małecki Jun 24 '23 at 19:41
  • As I can see in your MLmodel, the environment image and scoring script are not defined. Try adding the image and the scoring script, either in the MLmodel or by passing them to the deployment function. – JayashankarGS Jun 25 '23 at 04:52
  • The model type is already there; it was passed as type="mlflow_model" to the Model constructor, see my code. As for the environment and scoring script, well, according to the Microsoft documentation, "When you deploy a MLflow model to managed online endpoint, scoring script and environment is generated for you." – Jakub Małecki Jun 26 '23 at 06:53
  • You need to give a scoring script and an image like `mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cpu-inference:latest` for local deployment. – JayashankarGS Jun 26 '23 at 10:37
  • Try it like this once: `blue_deployment = ManagedOnlineDeployment( name="blue", endpoint_name="endpoint-26-test", model=model, instance_type="Standard_F4s_v2", instance_count=1, environment=Environment(image="mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cpu-inference:latest") )` and deploy it locally. If any error occurs, go to the container logs and look there. – JayashankarGS Jun 26 '23 at 10:45
  • Specifying the environment helped partially, i.e. I was able to build the image. But it's still not working: when the container starts, it complains and suggests I should debug my scoring script. And there is no scoring script, and I don't even want to write one. Again, the Azure documentation says "scoring_script (...) is generated for you." To me this seems to be a bug, or at least a gap in the documentation. Just for comparison, I deployed the model natively using MLflow and no scoring script was required. – Jakub Małecki Jun 26 '23 at 20:39
  • Yeah, this is what I was trying to tell you: if you don't provide a scoring script, the container exits. With a managed online endpoint everything is managed for you, but locally it seems we need to manage it ourselves. – JayashankarGS Jun 27 '23 at 01:50