We have a project that essentially follows this docker example, with the only difference that we created a custom model similar to this one, whose code lives in a directory called `forecast`. We succeeded in running the model with `mlflow run`.
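For context, here is a minimal sketch of what the custom model code looks like; the file name `forecast/model.py` and the class name `ForecastModel` are placeholders for our actual code:

```python
# forecast/model.py -- hypothetical layout of our custom model code
import mlflow.pyfunc


class ForecastModel(mlflow.pyfunc.PythonModel):
    """Custom pyfunc wrapper; the real forecasting logic lives in the forecast package."""

    def load_context(self, context):
        # Load whatever artifacts are needed for prediction (weights, config, ...).
        self.artifacts = context.artifacts

    def predict(self, context, model_input):
        # Placeholder prediction logic.
        return model_input
```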
The problem arises when we try to serve the model. After running
mlflow models build-docker -m "runs:/my-run-id/my-model" -n "my-image-name"
we fail to run the container with
docker run -p 5001:8080 "my-image-name"
with the following error:
ModuleNotFoundError: No module named 'forecast'
It seems that the Docker image does not include the source code that defines our custom model class.
With a Conda environment the problem does not arise, thanks to the `code_path` argument of `mlflow.pyfunc.log_model`.
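For reference, this is roughly how we log the model. It is only a sketch: `ForecastModel`, the import path and the `conda.yaml` file are placeholders for our actual setup, while the argument names are those of `mlflow.pyfunc.log_model`:

```python
import mlflow.pyfunc
from forecast.model import ForecastModel  # our custom model class (hypothetical name)

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="my-model",
        python_model=ForecastModel(),
        # code_path makes MLflow copy the forecast/ source tree into the logged
        # model, so the class can be imported again when the model is loaded.
        code_path=["forecast"],
        conda_env="conda.yaml",
    )
```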
Our Dockerfile is very basic, containing just `FROM continuumio/miniconda3:4.7.12` and `RUN pip install {model_dependencies}`.
How can we make the Docker image aware of the source code needed for deserialising the model and running it?