
I'm currently working on deploying my MLflow model in a Docker container. The container is already set up with all the dependencies the model needs, so it seems redundant for MLflow to also create/activate a conda environment for the model. The documentation (https://www.mlflow.org/docs/latest/cli.html#mlflow-models-serve) says you can serve the model with the --no-conda flag, in which case MLflow will assume you are "running within a Conda environment with the necessary dependencies". This works for us in any environment that has the necessary dependencies, not just a Conda environment. Is this correct, or do we absolutely need to have a Conda environment active when running with the --no-conda flag?

For example, I can create a virtualenv and, with the virtualenv active, serve the model locally using mlflow models serve -m [model/path] --no-conda. The model then performs properly, but the documentation makes it sound like this shouldn’t work because it explicitly calls for a Conda environment.
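For reference, a minimal sketch of the workflow described above (the environment name is illustrative, the model path is a placeholder, and the install line assumes a scikit-learn model):

```shell
# Create and activate a plain virtualenv -- no conda involved
python3 -m venv mlflow-env
. mlflow-env/bin/activate

# Install the model's dependencies however you prefer, e.g.:
#   pip install mlflow scikit-learn
# Then serve the model, telling MLflow not to create a conda
# environment (model path and port are placeholders):
#   mlflow models serve -m model/path --no-conda --port 5000
```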

Conor

1 Answer


You do not need to have a Conda environment installed or active when using the --no-conda option.

As noted in the comment below (thanks @Nander Speerstra), --no-conda is deprecated in newer versions of MLflow in favor of --env-manager=local.

The Quickstart guide (https://www.mlflow.org/docs/latest/quickstart.html) notes that this is fine as long as all dependencies are installed; it does not matter how they were installed (pipenv, poetry, or pip).

The caveat is that this way you can't have MLflow manage your project's dependencies (since MLflow uses conda to install them).

You should be able to safely continue your current practice.

Andreas Klintberg
    Running `--no-conda` will cause issues in newer versions of mlflow: ```FutureWarning: `--no-conda` is deprecated and will be removed in a future MLflow release. Use `--env-manager=local` instead.``` – Nander Speerstra Jun 03 '22 at 09:49