TL;DR: How to minimize a fairly smooth function that returns an integer value (not a float)?
>>> import scipy.optimize as opt
>>> opt.fmin(lambda p: 0.1*p[0]**2 + 0.1*p[1]**2, (-10, 9))
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 49
Function evaluations: 92
array([ -3.23188819e-05, -1.45087583e-06])
>>> opt.fmin(lambda p: int(0.1*p[0]**2 + 0.1*p[1]**2), (-10, 9))
Optimization terminated successfully.
Current function value: 17.000000
Iterations: 17
Function evaluations: 60
array([-9.5 , 9.45])
Trying to minimize a function that accepts floating-point parameters but returns an integer, I'm running into the problem that the solver terminates prematurely. The examples above demonstrate the effect: notice that when the returned value is truncated to an int, the optimization stops almost immediately, far from the minimum.
I assume this is happening because the solver detects no change in the objective value, i.e. the first time it perturbs a parameter, the step it takes is too small to change the integer result, so the difference between the first evaluation and the next is exactly 0, incorrectly indicating that a minimum has been found.
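For example, poking at the point where the second run stopped shows a flat patch: nudging either parameter by a small amount leaves the integer value unchanged.
>>> f = lambda x, y: int(0.1*x**2 + 0.1*y**2)
>>> f(-9.5, 9.45), f(-9.50005, 9.45), f(-9.5, 9.45005)
(17, 17, 17)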
I've had better luck with optimize.anneal, but despite the function's integer-valued return, I've plotted some regions of it in three dimensions and it's actually pretty smooth, so I was hoping a derivative-aware minimizer would work better.
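Though I suspect a minimizer that estimates derivatives by finite differences will hit the same wall: the default step is far too small to cross an integer jump, so the estimated gradient comes out exactly zero. A minimal illustration (arbitrary evaluation point, SciPy's usual sqrt-machine-eps step):

    import numpy as np
    from scipy.optimize import approx_fprime

    f = lambda p: int(0.1 * p[0]**2 + 0.1 * p[1]**2)

    # the ~1.5e-8 step never moves far enough to change the int result,
    # so every gradient component comes out as exactly 0.0
    print(approx_fprime(np.array([-10.0, 9.0]), f, 1.49e-8))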
For now I've reverted to manually graphing the function to explore the space, but I'd like to introduce a couple more parameters, so it would be great to get this working.
The function I'm trying to minimize can't be made to return a float: it counts the number of successful hits from a cross-validation run, and I'm having the optimizer alter parameters of the model.
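In the meantime, the graphing-style exploration can at least be automated with scipy.optimize.brute, which simply evaluates the objective on a grid. A sketch, where neg_cv_hits is a hypothetical stand-in for my real cross-validation scorer:

    from scipy.optimize import brute

    def neg_cv_hits(params):
        # hypothetical stand-in for the real scorer; brute minimizes,
        # so the hit count is negated to turn maximization into minimization
        a, b = params
        return -int(50 - 0.1 * a**2 - 0.1 * b**2)

    # coarse grid over each parameter; finish=None skips the default
    # fmin polish, which would stall on the integer plateaus as above
    ranges = (slice(-10, 10.5, 0.5), slice(-10, 10.5, 0.5))
    print(brute(neg_cv_hits, ranges, finish=None))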
Any ideas?
UPDATE
Found a similar question: How to force larger steps on scipy.optimize functions?
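Along those lines, here is a sketch of forcing larger steps directly, assuming a SciPy recent enough that minimize's Nelder-Mead method accepts an initial_simplex option (the 5-unit spread is arbitrary):

    import numpy as np
    from scipy.optimize import minimize

    f = lambda p: int(0.1 * p[0]**2 + 0.1 * p[1]**2)

    start = np.array([-10.0, 9.0])
    # spread the starting simplex wide enough that its vertices see
    # different integer values instead of a flat patch
    simplex = np.array([start, start + [5.0, 0.0], start + [0.0, 5.0]])
    res = minimize(f, start, method='Nelder-Mead',
                   options={'initial_simplex': simplex})
    print(res.x, res.fun)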