
I want to optimize my function Strategy2(alpha, beta), which performs a betting strategy on some data, starting with a wallet value of 20,000, and returns a new wallet value.

So I need to find the optimum alpha and beta values that maximize the returned value.

A quick google suggests that scipy is the way to go, but I am struggling to implement it.

Before attempting this optimization approach, I fixed alpha and searched for the optimum beta, then fixed beta and searched for the optimum alpha.

That approach gave an optimum at Strategy2(23, 3), which returned 24,650.

Here is my attempt to implement the minimize method found in the module scipy:

import numpy as np
from scipy.optimize import minimize
bnds = ((None, None), (None, None))
param = [0, 0]
f = lambda param: Strategy2(param[0], param[1])
param_init = 0, 100
param = minimize(f, param_init, method='SLSQP', bounds=bnds).x
print(param)

As you can see, I don't really know what I am doing; this simply returns

[  0. 100.]

with a final wallet value of 10,705, which is clearly less than 24,650, so something is clearly not working.

How can I get it so I maximize Strategy2(alpha, beta)? Ideally I want to vary alpha from 0 to 100, and vary beta from 1 to 15.

Thanks in advance.

EDIT: My reasoning for the above code was that I was just trying to adapt the following working code:

import numpy as np
from scipy.optimize import minimize
bnds = ((None, None), (None, None))
fun = lambda param: np.linalg.norm(-np.exp(param[0]) + param[1])
param_init = -4, 4
param = minimize(fun, param_init, method='SLSQP', bounds=bnds).x

which correctly minimises the above function.

If there is a better way to maximise my function then please let me know.

gr12345
  • _here is my attempt to implement scipy_ <- `scipy` is a library, or a module, containing various data structures and functions. I am not sure what you mean when you say _...implement scipiy..._. Additionally, there is not much we can test here given the code you posted. Please post a minimal, verifiable, reproducible example (https://stackoverflow.com/help/minimal-reproducible-example). – artemis Nov 12 '19 at 17:39
  • I've edited the comment to clarify what I meant regarding scipy. – gr12345 Nov 12 '19 at 18:31

1 Answer


If your Strategy2(alpha, beta) returns the wallet value, then your objective function f should be inverted: minimizing f should then be equivalent to maximizing Strategy2(alpha, beta), which is not the case in your code. I suggest using:

bnds = ((None, None), (None, None))
# Invert the objective: minimizing 1/Strategy2 maximizes Strategy2
# (this assumes the wallet value stays strictly positive)
f = lambda param: 1 / Strategy2(param[0], param[1])
param_init = 0, 100
param = minimize(f, param_init, method='SLSQP', bounds=bnds).x
print(param)

In this case, if f is minimized then Strategy2() is maximized. I also suggest setting some bounds to limit your search space (good for speed and efficiency).
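As a minimal, self-contained sketch of this idea, here is the negation variant (the other common way to invert an objective) with the bounds from the question and an interior starting point. Since Strategy2 itself was never posted, a smooth quadratic stand-in with a known peak at (23, 3) is used purely for illustration:

```python
from scipy.optimize import minimize

# Hypothetical stand-in for Strategy2: a smooth function whose maximum
# is known to sit at alpha = 23, beta = 3, with value 24650 at the peak.
def strategy2(alpha, beta):
    return 24650 - (alpha - 23) ** 2 - 100 * (beta - 3) ** 2

# Negate the objective so that minimizing f maximizes strategy2.
f = lambda p: -strategy2(p[0], p[1])

# Bound the search space to the ranges from the question
# (alpha in [0, 100], beta in [1, 15]) and start inside them,
# not on the edge.
bnds = ((0, 100), (1, 15))
param_init = (50, 8)

res = minimize(f, param_init, method='SLSQP', bounds=bnds)
print(res.x)      # approximately [23., 3.]
print(-res.fun)   # approximately 24650.0
```

Note that starting at a bound (as `param_init = 0, 100` does) can leave a gradient-based method stuck there, which matches the `[0. 100.]` result seen in the question.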

SuperKogito
  • Thanks, I've edited the code so that the function is inverted, but it still returns [ 0. 100.]. It appears to just run the function with the parameters [0, 100] repeatedly without varying them, and so returns [0, 100] as the optimal value – gr12345 Nov 12 '19 at 18:57
  • Is there a rationale for why 1/f is better than -f when trying to maximize a function? I've always wondered which is better. – Bill Aug 30 '20 at 20:29
  • The choice of an objective function is very decisive. I cannot explain it perfectly, because I am also not very knowledgeable in this topic. But to give you a quick idea: most optimizations rely on gradient descent, the Hessian and the Jacobian. To keep it simple, consider a 1d-function, where the equivalents are the derivative functions. – SuperKogito Sep 02 '20 at 12:13
  • Take f(x) = 1-x; in the case of -f(x), the 1st derivative that will be used to advance the optimization is a constant = -1, whereas in the case of 1/f(x) it is = 1/(1-x)^2. I think that the 2nd variant would probably lead to a more sensitive optimization, and the 1st might even miss the optimum. – SuperKogito Sep 02 '20 at 12:13
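On the -f vs 1/f question discussed above, both transforms flip the ordering of the objective, with one caveat: 1/f only does so where f is strictly positive (and it introduces a singularity wherever f crosses zero). A small sketch on a toy 1-D function with a known maximum at x = 2, using scipy's minimize_scalar, shows both variants landing on the same optimum:

```python
from scipy.optimize import minimize_scalar

# Toy objective: g(x) = -(x - 2)**2 + 5 has its maximum at x = 2,
# and stays strictly positive on [0, 4] (g(0) = g(4) = 1).
g = lambda x: -(x - 2) ** 2 + 5

# Variant 1: minimize the negation -g.
res_neg = minimize_scalar(lambda x: -g(x), bounds=(0, 4), method='bounded')

# Variant 2: minimize the reciprocal 1/g
# (valid here only because g > 0 on the whole interval).
res_inv = minimize_scalar(lambda x: 1 / g(x), bounds=(0, 4), method='bounded')

print(res_neg.x, res_inv.x)  # both approximately 2.0
```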