
I use Hyperopt to select the parameters of an XGBoost model in Python 3.7.

As the objective I use a function that returns several values, including the loss:

from hyperopt import STATUS_OK

def objective(params, n_folds=nfold):
    ...
    return {'loss': loss, 'params': params, 'iteration': ITERATION,
            'estimators': n_estimators,
            'train_time': run_time, 'status': STATUS_OK}

In my case, 'loss' is not one of the built-in cross-validation metrics (e.g. 'auc') but a custom metric of my own. I expected it to decrease over the iterations, but it keeps varying as if the search were random.

Moreover, when I look at the fmin progress output (example below), I see a quite different metric decreasing.

XGB CV model report                                                                                                    
    Best test-auc-mean 98.51% (std: 0.21%)
 83%|███████████████████████████████████████▏       | 25/30 [08:26<07:23, 88.64s/trial, best loss: 0.22099999999999997] 

How can I find out which metric Hyperopt is actually minimizing?


1 Answer


The "best loss" below is my metric. I was confused because it does not show the current metric value but always the best one found so far, which the algorithm keeps trying to improve. However, improvements in the metric may only become visible after several hundred iterations, not after 30.

XGB CV model report
    Best test-auc-mean 98.51% (std: 0.21%)
 83%|███████████████████████████████████████▏       | 25/30 [08:26<07:23, 88.64s/trial, best loss: 0.22099999999999997]
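
To see the actual metric value of every iteration (not only the running best from the progress bar), you can pass a Trials object to fmin and read trials.losses() afterwards. Below is a minimal sketch; the one-dimensional search space and the quadratic toy loss are made up for illustration and stand in for your own space and custom metric:

from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

# Hypothetical one-dimensional search space, for illustration only.
space = {'x': hp.uniform('x', -5, 5)}

def objective(params):
    # Stand-in for your custom metric; lower is better.
    loss = (params['x'] - 1.0) ** 2
    return {'loss': loss, 'status': STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)

# trials.losses() lists the loss of every single trial in order,
# whereas the progress bar only reports the best loss seen so far.
for i, loss in enumerate(trials.losses()):
    print(i, loss)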
