3

I've been using MLflow and logging parameters with the function below (from pydataberlin).

import warnings

import numpy as np
import mlflow
import mlflow.sklearn
from sklearn.linear_model import ElasticNet

# load_data and eval_metrics are helper functions defined elsewhere in the notebook.

def train(alpha=0.5, l1_ratio=0.5):
    # Train a model with the given parameters
    warnings.filterwarnings("ignore")
    np.random.seed(40)

    # Read the wine-quality CSV file (make sure you're running this from the root of MLflow!)
    data_path = "data/wine-quality.csv"
    train_x, train_y, test_x, test_y = load_data(data_path)

    # Useful for multiple runs (only doing one run in this sample notebook)
    with mlflow.start_run():
        # Fit an ElasticNet model
        lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
        lr.fit(train_x, train_y)

        # Evaluate metrics
        predicted_qualities = lr.predict(test_x)
        (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)

        # Print out metrics
        print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
        print("  RMSE: %s" % rmse)
        print("  MAE: %s" % mae)
        print("  R2: %s" % r2)

        # Log parameters, metrics, and the model to MLflow
        mlflow.log_param(key="alpha", value=alpha)
        mlflow.log_param(key="l1_ratio", value=l1_ratio)
        mlflow.log_metric(key="rmse", value=rmse)
        mlflow.log_metrics({"mae": mae, "r2": r2})
        mlflow.log_artifact(data_path)
        print("Save to: {}".format(mlflow.get_artifact_uri()))

        mlflow.sklearn.log_model(lr, "model")

Once I run train() with its parameters, I can't see any artifacts in the UI, but I can see the model along with its parameters and metrics.

The Artifacts tab says "No Artifacts Recorded. Use the log artifact APIs to store file outputs from MLflow runs." However, in Finder the artifacts do exist in the model folders, along with the model pickle.

Any help would be appreciated.

abdoulsn

7 Answers

6

Is this code not being run locally? Are you moving the mlruns folder perhaps? I'd suggest checking the artifact URI present in the meta.yaml files. If the path there is incorrect, such issues might come up.

aebeljs
  • Yes, I run it locally. But the example from the PyCon conference I was following also runs it locally, and it works. – abdoulsn Jul 01 '20 at 12:37
  • @abdoulsn, inside your mlruns folder there will be a folder for each experiment; it could be named 0 or 1 and so on. Say it's 0. In it you can find a meta.yaml file. Open it and check what artifact_location is specified as. It should be mlruns/0; if it's not, make it that. Similarly, each of the run folders within this folder has its own meta.yaml file. Check the artifact_uri in those files too. It should be of the format mlruns/0/<run_id>/artifacts. Change it to this if needed. Ensuring this made it work for me. – aebeljs Jul 04 '20 at 19:39
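To check those files programmatically, here is a minimal sketch (my addition, not from the answer) that prints the artifact path recorded in every meta.yaml under mlruns/. It assumes PyYAML is installed and that you run it from the folder that contains mlruns:

    from pathlib import Path
    import yaml  # PyYAML

    # Walk every experiment- and run-level meta.yaml under mlruns/
    for meta in sorted(Path("mlruns").glob("**/meta.yaml")):
        with open(meta) as f:
            contents = yaml.safe_load(f)
        # Experiment-level files record artifact_location; run-level files record artifact_uri.
        path = contents.get("artifact_location") or contents.get("artifact_uri")
        print(f"{meta}: {path}")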
5

I had a similar issue. In my case, I solved it by running mlflow ui inside the mlruns directory of my experiment.

See the full discussion on GitHub.

Hope it helps!

Cristobal
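For reference, the command sequence the answer above describes looks roughly like this (the project path is hypothetical):

    cd path/to/your-project/mlruns
    mlflow ui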
1

I had the same problem (with mlflow.pytorch). For me it was fixed by adjusting the log_model() and log_artifacts() calls.

The combination that logged the artifacts for me is:

mlflow.log_metric("metric name", metric_value)  # metric_value: whatever number you computed
mlflow.pytorch.log_model(model, "model")
mlflow.log_artifacts(output_dir)

Also, to launch the UI from the terminal, cd to the directory that contains mlruns. For example, if mlruns is located at ...\your-project\mlruns:

cd ...\your-project

Then activate the environment where MLflow is installed:

...\your-project> conda activate [myenv]

Finally, run mlflow ui:

(myenv) ...\your-project> mlflow ui
Maryam Bahrami
1

I had a similar problem. After I changed the script's folder, the UI stopped showing the new runs.

The solution that worked for me was to stop all running MLflow UI processes before starting a new one whenever I changed the folder.
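If you are not sure whether an old UI process is still running, one way to check on Linux/macOS is to see what is bound to MLflow's default port (5000) and stop it; the PID below is just a placeholder for whatever lsof reports:

    lsof -i :5000    # list processes listening on the default MLflow UI port
    kill 12345       # replace 12345 with the PID shown by lsof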

1

I run the same Python code in a locally hosted Jupyter notebook, and the issue was solved for me when I ran mlflow ui from the directory that contains the notebook.

0

I had this issue when running mlflow server and storing artifacts in S3. I was able to fix it by installing boto3.

Fernando Wittmann
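For context, a minimal sketch of the setup the answer above describes (the bucket name and store path are hypothetical); boto3 is the client MLflow uses to upload artifacts to S3:

    pip install boto3
    mlflow server --backend-store-uri ./mlruns --default-artifact-root s3://my-bucket/mlflow-artifacts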
0

I too had issues with the MLflow UI not showing any data even though the runs were created under the tracking URI. Hope the below helps someone.

In my case I just created a folder under my project directory and pointed the tracking URI at it:

    mlflow.set_tracking_uri("./model_metrics")

The following then activates MLflow and starts a run in which you capture your logs:

    # Start an MLflow run
    experiment_id = mlflow.start_run()
    with experiment_id:
        # capture your logs here
        ...

Finally, run the MLflow UI command along with --backend-store-uri=./model_metrics:

    mlflow ui --backend-store-uri=./model_metrics

Siva Dorai