
I'm using hyperas to optimize a function, but it is not returning the best result. During the run the printout reads as follows:

100%|██████████| 100/100 [7:01:47<00:00, 411.15s/it, best loss: 5.1005506645909895e-05]

but afterwards when I print the results of the best model I get

5.8413380939757486e-05

This has happened a couple of times now and I don't understand why. I wrote a reproducible example and I am getting the same problem.

from hyperopt import Trials, STATUS_OK, tpe
from hyperas import optim
from hyperas.distributions import uniform

def test_function():
    x = {{uniform(-23, 23)}}
    function = x**2 + x

    return {'loss': function, 'status': STATUS_OK, 'model': function}

# just a dummy data function to get the optimization to run; my real function uses real data
def data_example():
    print('skip')
    return [0, 1, 2]

trials = Trials()
# trials = pickle.load(open(trials_file, "rb"))
print('started new set of optimization runs')

if __name__ == '__main__':
    best_run, best_model = optim.minimize(model=test_function,
                                          data=data_example,
                                          algo=tpe.suggest,
                                          trials=trials,
                                          max_evals=100)

    print(best_run)

Last time I ran this, the progress bar showed

100%|██████████| 100/100 [00:00<00:00, 498.77it/s, best loss: -0.24773021221244024]

and the print(best_run) showed

{'x': -0.5476422899067598}

Why is my best_run result not lining up with the smallest loss in the optimization run?

Novice
1 Answer


Have you considered that best_run and the best loss are not the same thing?

best_run returns the argmin of your loss, i.e. the input value that minimizes it, which for f(x) = x**2 + x would indeed be x = -1/2, while the best loss is the minimum value itself, f(-1/2) = -1/4.
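In fact, plugging the reported best_run value back into the objective reproduces the reported best loss, so the two numbers in the question are consistent with each other; a quick check:

```python
# best_run['x'] as reported in the question
x = -0.5476422899067598
loss = x**2 + x
print(loss)  # matches the "best loss" shown in the progress bar, about -0.2477
```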

Tinu
  • wow, that was definitely an oversight in my example code. I think best_run actually returns the input values to the function, but still, that's the problem there. I think I'm still having an issue with my real example though. Let me check – Novice Oct 25 '19 at 08:15
  • Reran my code and I don't see the same issue. No clue what happened. Thanks for the answer though, definitely helped me – Novice Oct 25 '19 at 08:21
  • Actually, I'm still seeing the problem, but now that you've shown that my example code works, I guess it's something in my own code – Novice Oct 25 '19 at 08:34
  • If you could provide a little bit more info about your model, your real example, I maybe can help you. Right now, I don't see the problem tbh. Are you doing hyperparameter tuning of NNs by any chance? – Tinu Oct 25 '19 at 08:53
  • No, I'm building a stacked xgboost model that combines a few different types of models. I've realized that what is happening is that the output of optim.minimize is giving me the last output from the run, not the best-performing one. Not sure what to do from there – Novice Oct 25 '19 at 10:56