
I'm getting different output for feature importance when I run AutoML in Azure, Google, and H2O, even though the data and the features are the same. What could be the reason for this? Is there any other method to compare the models?

g2021
1 Answer


This is expected behavior; H2OAutoML is not reproducible by default. To make H2OAutoML reproducible you need to set max_models, set a seed, exclude Deep Learning (exclude_algos=["DeepLearning"], since Deep Learning is not reproducible on multiple cores), and make sure max_runtime_secs is not set (time-based stopping depends on the machine's speed).
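A minimal sketch of a reproducible run with the H2O Python API, assuming a hypothetical training file and target column name:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

train = h2o.import_file("train.csv")   # hypothetical training data
y = "target"                           # hypothetical target column
x = [c for c in train.columns if c != y]

# Reproducible configuration: fixed model budget, fixed seed,
# no Deep Learning, and no max_runtime_secs (time-based stopping
# would make results depend on machine speed).
aml = H2OAutoML(
    max_models=20,
    seed=1,
    exclude_algos=["DeepLearning"],
)
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard)
```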

To compare the models you can use model explanations, or you can simply compare the model metrics, ideally on the same holdout set so the numbers from the different platforms are comparable.
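For the H2O side, a sketch of both approaches, continuing from the snippet above and assuming a hypothetical holdout file:

```python
test = h2o.import_file("test.csv")          # hypothetical holdout data

# Metrics for the leader model on the holdout set; score the same set
# with the Azure and Google models and compare the same metric.
perf = aml.leader.model_performance(test)
print(perf)                                 # e.g. AUC/logloss for classification

# Model explanations (variable importance, SHAP summary, partial
# dependence plots, ...) for the whole AutoML run.
aml.explain(test)
```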

Tomáš Frýda