I use Hyperopt to select the parameters of an XGBoost model in Python 3.7.
As the objective I use a function that returns several values, including the loss:
    def objective(params, n_folds=nfold):
        ...
        return {'loss': loss, 'params': params, 'iteration': ITERATION,
                'estimators': n_estimators,
                'train_time': run_time, 'status': STATUS_OK}
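For context, a minimal pure-Python sketch (deliberately not importing hyperopt, with a toy objective of my own invention) of how a dict-returning objective is consumed: the search compares trials only by the 'loss' key, while the other keys ('params', 'iteration', etc.) are merely recorded.

```python
import random

STATUS_OK = "ok"  # stands in for hyperopt's STATUS_OK constant

def objective(params):
    # Toy custom loss: squared distance of x from 3. The extra keys in
    # the returned dict are bookkeeping only; they are not optimized.
    loss = (params["x"] - 3) ** 2
    return {"loss": loss, "params": params, "status": STATUS_OK}

def random_search(space, n_trials, seed=0):
    # Mimics the contract of a minimizer given a dict-returning
    # objective: it ranks completed trials by result["loss"] alone.
    rng = random.Random(seed)
    trials = [objective({"x": rng.uniform(*space)}) for _ in range(n_trials)]
    return min(trials, key=lambda r: r["loss"])

best = random_search(space=(-10, 10), n_trials=50)
print(best["params"], best["loss"])
```

With a fixed seed the search is deterministic, and adding more trials can only improve (never worsen) the best recorded loss.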
In my case, 'loss' is not one of the built-in cross-validation metrics (e.g. 'auc') but a custom metric of my own. I expected it to decrease over the iterations, but it keeps fluctuating, as if the search were random.
Moreover, when I watch the fmin progress output (example below), I see a quite different metric decreasing.
XGB CV model report
Best test-auc-mean 98.51% (std: 0.21%)
83%|███████████████████████████████████████▏ | 25/30 [08:26<07:23, 88.64s/trial, best loss: 0.22099999999999997]
How can I tell which metric Hyperopt is actually minimizing?