
runId generated in log_model call needs to be accessed in mlflow models serve

I am trying to run a bare-minimum MLflow workflow to deploy custom models.

1st step taken: I save the model using log_model. Observation: the artifacts are duly saved in mlruns.

2nd step taken: I am able to serve the model using mlflow models serve -m runs:… Observation: the server is started on port 5000.

3rd step taken: I am able to run a curl invocation to predict. Observation: the prediction is returned.
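(The invocation is along these lines; the exact payload format depends on the MLflow version, and the column name is only illustrative. Shown here with the 1.x pandas-split JSON format:)

curl -X POST http://127.0.0.1:5000/invocations \
  -H 'Content-Type: application/json; format=pandas-split' \
  -d '{"columns": ["x"], "data": [[1], [2]]}'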

Question: How do I get the runId generated in Step 1 so that it can be passed to Step 2? That is, does the log_model call return it somewhere I can capture?

Please advise on the recommended workflow for the above use case (whether a tracking/MLflow server needs to be used, etc.).

mlflow.pyfunc.log_model(artifact_path="artifacts", python_model=add5_model)
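(For context, the full logging step looks roughly like this; add5_model is just a trivial pyfunc model that adds 5, shown here only to make the example concrete:)

import mlflow.pyfunc

# Trivial custom model: adds 5 to whatever input it receives.
class Add5Model(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input):
        return model_input + 5

add5_model = Add5Model()

# Called outside an explicit run, so MLflow starts a run implicitly and saves the
# artifacts under mlruns/<experiment_id>/<run_id>/artifacts/
mlflow.pyfunc.log_model(artifact_path="artifacts", python_model=add5_model)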

Question: how do I access the runId returned by the above log_model call so that I can use it in mlflow models serve -m runs:…?

ForRace

2 Answers


The easiest way to find the run ID of the model is to check the MLflow tracking server's UI. The unique run ID is shown at the top of the run's page.

[Screenshot: the run ID shown at the top of the run page in the MLflow tracking UI]

To serve the model from that run ID, use mlflow models serve from the MLflow CLI:

mlflow models serve -m runs:/94709644a8834ade8e6deb67b420c157/artifacts/model

Docs page here.

It's also worth noting that there are functions to work with runs in a given experiment using the CLI. Docs for that here.
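For example, you can list the runs (and their IDs) in an experiment from the command line (a sketch; this assumes the default experiment ID 0 and that the command is run from the directory containing mlruns):

mlflow runs list --experiment-id 0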

Raphael K

If I understand the intent behind your question, you're interested in knowing the run ID in order to be able to serve the model programmatically. With the model registry introduced in MLflow 1.5, you can register models (with version numbers and lifecycle stages such as Staging and Production) and serve them without a run ID via a new model URI scheme:

models:/<model_name>/<model_version>

models:/<model_name>/<stage>

To register the model when logging it, pass the registered_model_name=<registered model name> argument to the log_model() call, as in the sketch below.
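A rough sketch of how that could look for the add5 model from the question (the registered name "add5", the version number, and the explicit stage transition via MlflowClient are illustrative assumptions; note that the model registry requires a database-backed tracking store):

import mlflow.pyfunc
from mlflow.tracking import MlflowClient

class Add5Model(mlflow.pyfunc.PythonModel):  # stand-in for the asker's custom model
    def predict(self, context, model_input):
        return model_input + 5

# Log and register in one step: this creates the registered model "add5"
# (version 1 on first registration).
mlflow.pyfunc.log_model(
    artifact_path="artifacts",
    python_model=Add5Model(),
    registered_model_name="add5",
)

# Promote that version to the Production stage so it can be served via
# models:/add5/Production
client = MlflowClient()
client.transition_model_version_stage(name="add5", version=1, stage="Production")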

Assuming you had tagged a version as Production, you could then serve the add5 model with mlflow models serve -m models:/add5/Production instead of specifying a run id.

Krish