
I've built an SVD recommendation model in AzureML Studio based on the example in the Azure docs. If I deploy the model to a real-time Container Instance endpoint directly from the job, it works. However, I'd like to create a real-time endpoint that uses the Managed compute type (a managed online endpoint) instead.
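For context, here's a minimal sketch of how I'm creating the endpoint and deployment with the v2 Python SDK; the workspace details, names, paths, base image, and instance size below are placeholders rather than my exact setup:

from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
    Model,
)
from azure.identity import DefaultAzureCredential

# Connect to the workspace (IDs are placeholders).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Create the managed online endpoint.
endpoint = ManagedOnlineEndpoint(name="svd-recommender", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the trained model with the generated score.py as the scoring script.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="svd-recommender",
    model=Model(path="./trained_model_outputs"),
    code_configuration=CodeConfiguration(code=".", scoring_script="score.py"),
    environment=Environment(
        image="mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cpu-inference:latest",
        conda_file="./conda.yml",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()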

The problem I'm running into is that when I create a new endpoint, I need to provide a score.py scoring script. I've uploaded the automatically generated score.py file (attached below). However, the deployment then fails with:

  File "/azureml-envs/minimal/lib/python3.8/site-packages/azureml_inference_server_http/server/user_script.py", line 73, in load_script
    main_module_spec.loader.exec_module(user_module)
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/var/azureml-app/230210165016-1423170274/score.py", line 6, in <module>
    from azureml.studio.core.io.model_directory import ModelDirectory
ModuleNotFoundError: No module named 'azureml.studio'

So it looks like the azureml.studio package is not available in the environment. Since azureml.studio doesn't seem to be publicly available, I don't know how to get the score.py script to work. I tried creating a custom environment and adding the package to the conda dependencies (roughly as sketched below), but that didn't help either.
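A minimal sketch of that attempt; the azureml-designer-* pip names are guesses at packages that might carry the azureml.studio namespace, not something I've confirmed:

from pathlib import Path

# Hypothetical conda spec for the custom environment; the azureml-designer-*
# pip names below are unverified guesses, since I couldn't find an official
# public package that provides azureml.studio.
conda_yml = """\
name: svd-scoring
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - azureml-defaults
      - azureml-designer-core
      - azureml-designer-classic-modules
      - azureml-designer-serving
"""
Path("conda.yml").write_text(conda_yml)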

The score.py file:

import os
import json
from collections import defaultdict
from pathlib import Path

from azureml.studio.core.io.model_directory import ModelDirectory
from azureml.studio.modules.recommendation.score_svd_recommender.score_svd_recommender import \
    ScoreSVDRecommenderModule, RecommenderPredictionKind
from azureml.studio.common.datatable.data_table import DataTable
from azureml.designer.serving.dagengine.utils import decode_nan
from azureml.designer.serving.dagengine.converter import create_dfd_from_dict


# Resolve the model directory mounted by AzureML and load the schema that
# describes the expected input columns.
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'trained_model_outputs')
schema_file_path = Path(model_path) / '_schema.json'
with open(schema_file_path) as fp:
    schema_data = json.load(fp)


def init():
    # Load the trained SVD recommender from the mounted model directory.
    global model
    model = ModelDirectory.load(load_from_dir=model_path).model


def run(data):
    # The request body is a JSON array of row objects; collect the values
    # column-wise and decode any NaN placeholders.
    data = json.loads(data)
    input_entry = defaultdict(list)
    for row in data:
        for key, val in row.items():
            input_entry[key].append(decode_nan(val))

    # Rebuild a DataFrameDirectory matching the training schema and score it
    # with the SVD recommender in rating-prediction mode.
    data_frame_directory = create_dfd_from_dict(input_entry, schema_data)
    score_params = dict(
        learner=model,
        test_data=DataTable.from_dfd(data_frame_directory),
        training_data=None,
        prediction_kind=RecommenderPredictionKind.RatingPrediction)
    result_dfd, = ScoreSVDRecommenderModule().run(**score_params)
    result_df = result_dfd.data_frame
    return json.dumps(result_df.to_dict("list"))
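
For completeness, run() expects the request body to be a JSON array of row objects whose keys match the training schema. Here's a minimal sketch of how I call the endpoint once a deployment succeeds; the URL, key, and the User/Item column names are placeholders (the real column names come from _schema.json):

import json
import urllib.request

# Placeholder request body: a JSON array of rows keyed by the schema columns.
body = json.dumps([
    {"User": "u_123", "Item": "i_456"},
    {"User": "u_123", "Item": "i_789"},
])
req = urllib.request.Request(
    "https://<endpoint-name>.<region>.inference.ml.azure.com/score",
    data=body.encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <endpoint-key>"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))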

I would appreciate any comments and ideas. I've been stuck on this for a while. Thanks!

picklepick
  • Check whether these documents help: [Deploy an AutoML model with an online endpoint](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/machine-learning/how-to-deploy-automl-endpoint.md) [Deploy machine learning models to online endpoints](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/machine-learning/how-to-deploy-online-endpoints.md) [Deploy MLflow models to online endpoint](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md) – Naveen Sharma Mar 07 '23 at 09:37

0 Answers