
I am trying to integrate an MLflow server with my Kubeflow cluster on GCP. To do this I create an MLflow deployment and expose it using a LoadBalancer.

The machine learning code is deployed as a pod on the Kubeflow cluster. The MLflow server's IP:PORT is provided to this code for logging parameters (e.g. hyperparameters) and artifacts (e.g. models).

The issue is that the artifacts only get logged inside the Docker container (the pod with the machine learning code). Parameter logging, on the other hand, works perfectly fine after providing the MLflow server IP:PORT.
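For reference, the logging calls in the pod look roughly like this (the server address and file names are placeholders, not the real values):

```python
import mlflow

# Placeholder address of the MLflow server exposed via the LoadBalancer.
mlflow.set_tracking_uri("http://<MLFLOW_IP>:<PORT>")

with mlflow.start_run():
    # Sent to the tracking server over HTTP -- this shows up in the UI.
    mlflow.log_param("learning_rate", 0.01)

    # The client resolves the run's artifact URI and writes the file there
    # itself, so with a plain local path as artifact root the file never
    # leaves this pod.
    mlflow.log_artifact("model.pkl")

    # Prints where the artifacts are actually being written.
    print(mlflow.get_artifact_uri())
```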

Here is a screenshot.

1 Answer


The simple solution is to create a volume and mount it on both the ML model pod and the MLflow pod. The symptom only shows that your files are not stored in a volume the MLflow UI can access. Please share the details of the MLflow pod and the ML model pod. Say Mod1 is the pod running your model and Mlflowpod is where MLflow is deployed: create a volume 'Mlflow-artifacts', attach it to both pods, and set its mount path as the default artifact root of the MLflow server (the `--default-artifact-root` option of `mlflow server`). This will definitely help. Since both pods are in the same cluster, it is very unlikely that you are facing a load balancer or routing issue.
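As a rough sketch of the client side once such a shared volume exists (the mount path and experiment name below are assumptions, not values from the question): if both pods mount the volume at `/mlflow-artifacts` and the server is started with `--default-artifact-root /mlflow-artifacts`, new runs write their artifacts onto the shared volume and the UI pod can read them. An equivalent way to pin the location per experiment from the client is:

```python
import mlflow

mlflow.set_tracking_uri("http://<MLFLOW_IP>:<PORT>")

# /mlflow-artifacts is the assumed mount path of the shared volume
# ("Mlflow-artifacts") in BOTH the model pod and the MLflow pod.
exp_id = mlflow.create_experiment(
    "model-training",
    artifact_location="/mlflow-artifacts/model-training",
)

with mlflow.start_run(experiment_id=exp_id):
    mlflow.log_param("learning_rate", 0.01)
    # Now written onto the shared volume, so the MLflow UI can serve it.
    mlflow.log_artifact("model.pkl")
```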
