
I have five variables that I would like to subject to scipy.optimize.minimize in order to find the solution I am seeking in terms of A, B, C, D, and E. First, I imported minimize from scipy and defined the initial guesses (based on lab experiments, so they should be valid).

from scipy.optimize import minimize

A0 = 1.90
B0 = 6.40
C0 = 11.7
D0 = 3.70
E0 = 2.50

ABCDE0 = [A0, B0, C0, D0, E0]

Second, I defined the individual functions that make up the objective function, conveniently named Objective. I also tried to combine F, G, H, and I into one function, without any luck, so I decided to leave it like this for the time being.

def F(abcde):
    a, b, c, d, e = abcde
    return c - (b ** 2) / (a - e)

def G(abcde):
    a, b, c, d, e = abcde
    return (4 * e * ((a - e) * c - b ** 2)) / (a * c - b ** 2)

def H(abcde):
    a, b, c, d, e = abcde
    return b / (2 * (a - e))

def I(abcde):
    a, b, c, d, e = abcde
    return (2 * e * b) / (a * c - b ** 2)

def Objective(abcde):
    return (F(abcde) / G(abcde)) / (H(abcde) / I(abcde))

Third, I defined the constraint (i.e. (F/G)/(H/I)=1) and also the bounds named bnds to be +/- 10% of the initial guesses for the sake of simplicity.

def constraint(x):
    F = x[0]
    G = x[1]
    H = x[2]
    I = x[3]
    return (F / G) / (H / I) - 1

con = {'type': 'eq', 'fun': constraint}

min_per = 0.9
max_per = 1.1
bnds = ((A0*min_per, A0*max_per), (B0*min_per, B0*max_per), 
        (C0*min_per, C0*max_per), (D0*min_per, D0*max_per), 
        (E0*min_per, E0*max_per))

Fourth and finally, minimize provided me with a solution termed sol.

sol = minimize(Objective, ABCDE0, method='SLSQP', bounds=bnds, constraints=con, options={'disp': True})

If sol were printed with print(sol), the following output would appear.

Positive directional derivative for linesearch    (Exit mode 8)
            Current function value: 1.0
            Iterations: 18
            Function evaluations: 188
            Gradient evaluations: 14
     fun: 1.0
     jac: array([ 0.00000000e+00,  1.49011612e-08, -7.45058060e-09,  0.00000000e+00,
        0.00000000e+00])
 message: 'Positive directional derivative for linesearch'
    nfev: 188
     nit: 18
    njev: 14
  status: 8
 success: False
       x: array([ 2.09      ,  5.76      , 10.53      ,  4.07      ,  2.50000277])

To my novice mind, constraint appears to be the problem, but I am not sure due to my lack of experience with minimize.

  • What am I doing wrong?
  • Why is it not executing successfully?
  • Is it better to use root_scalar as suggested by @fuglede in this scenario?
  • Is it possible to include all the individual functions into one function to avoid making a mess?

Please be aware that D0 and d are not included in any of the functions but are important in the grand scheme of things, as D is one of the five independent variables.

naughty_waves
    If I'm understanding what you're trying to do correctly, your `constraint` should be a function of `a`, ...,`e`, and you would let `F = F([a, b, c, d, e])` instead (as an aside, note that using the name of a function for a variable as well is also asking for trouble). However, once you do so, what you effectively end up with is a constraint saying that the value of your objective is 1. That is, if you have any feasible solution, the objective will be 1, and the minimization is a bit dull. Is this what you want? If so, look into SciPy's root finding functionality. – fuglede Jul 18 '19 at 20:08
  • Thank you for your comment, @fuglede. My `constraint` (`(F/G)/(H/I)=1`) is based on a symmetry assumption, so I need to make my data, which may be anywhere from 0.95 to 1.05, become exactly 1 by varying `A`, `B`, `C`, `D`, and `E`. That is the reasoning behind the `constraint`. How can I implement `F = F([a, b, c, d, e])`? I can, of course, change the name to avoid confusion because I am in enough trouble as it is. – naughty_waves Jul 19 '19 at 07:03
    But are the `F`, `G`, `H`, and `I` in your constraint the same as in the definition of your objective? That is, are you trying to force your objective to be equal to one? And if so, since you are just trying to find a root of `objective - 1`, is there any reason to prefer `minimize` over e.g. `root_scalar`? – fuglede Jul 19 '19 at 07:13
  • Yes, @fuglede, they are the same. I am also trying to force my objective to be equal to one, yes. No reason other than not knowing of the existence of `root_scalar` before you set me straight. So, in other words, you recommend to use `root_scalar` instead of `minimize` for scenarios such as the one I described? – naughty_waves Jul 19 '19 at 07:20
    I just realized that `root_scalar` (unsurprisingly, given its name) only eats scalar functions, but there are vector equivalents, and yes, they'll probably perform better. However, I also just realized that `Objective(ABCDE0)` actually is 1, so it seems you already have a solution? – fuglede Jul 19 '19 at 11:20
    In fact, it seems `Objective` is constantly equal to 1. – fuglede Jul 19 '19 at 11:22
  • @fuglede, I realized the same as well. For the procedure I am doing, I need to convert my data to `A`, `B`, `C`, `D`, and `E` before I run `Objective`. It appears to be self-compensating, as this is yet another example of me overcomplicating things. I feel foolish, and for that I apologize. Thank you, though. If it is any consolation, I did learn a few things from your comments. – naughty_waves Jul 19 '19 at 14:15
    No worries; it's certainly not obvious from the given form that this would be the case. Let me attempt to consolidate these comments as an answer, so as to not have this appear as an unanswered question. – fuglede Jul 20 '19 at 16:42

1 Answer


Several things are going on here:

Firstly, as a matter of taste, I would probably have left the functions F, ..., I as taking multiple inputs to avoid having to unpack the list. The objective function does need a list as its argument, though; i.e. you could do something like

def F(a, b, c, d, e):
    return c - (b ** 2) / (a - e)

...

def objective(abcde):
    return (F(*abcde) / G(*abcde)) / (H(*abcde) / I(*abcde))

That's just style. More importantly, your constraint method won't quite do what you want: as I understand it, you want to evaluate the functions F, ..., I on the inputs, but that never happens; instead, the value of the variable F (which is an unfortunate name, as it shadows the name of the function) ends up being simply a. Rather, you would do something like

def constraint(x):
    return (F(*x) / G(*x)) / (H(*x) / I(*x)) - 1
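
With that fix in place, the constraint is wired up just as before; a minimal sketch, reusing objective, ABCDE0, and bnds from the question:

con = {'type': 'eq', 'fun': constraint}
sol = minimize(objective, ABCDE0, method='SLSQP', bounds=bnds,
               constraints=con, options={'disp': True})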

Now, constraint(x) is nothing but objective(x) - 1, so your constraint ends up stating that your objective must be equal to 1 at any feasible solution. This means that there's really not much optimization going on at all: any feasible solution is optimal. While a priori minimize will indeed attempt to find a feasible solution for you, chances are that you'll have more luck using some of the root-finding capabilities of scipy.optimize, as what you're doing is trying to find the roots of the function objective - 1.
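
To make that concrete, one simple way to recast the problem (just a sketch, assuming the objective, ABCDE0, and bnds defined above; the helper name residual is mine) is to drive the residual objective(x) - 1 to zero, for instance by minimizing its square:

from scipy.optimize import minimize

def residual(x):
    # the quantity that should be zero at a solution
    return objective(x) - 1

# minimizing the squared residual searches for a root of a scalar
# function of several variables while respecting the original bounds
res = minimize(lambda x: residual(x) ** 2, ABCDE0, method='SLSQP', bounds=bnds)
print(res.x)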

Now finally, this actually turns out to be straightforward: your function objective is equal to 1 wherever it's defined, so any input is a feasible solution. First, by writing F = ((a-e)*c - b**2)/(a-e), note that F/G = (a*c - b**2)/(4*e*(a-e)). Then (F/G)*I = (2*e*b)/(4*e*(a-e)) = b/(2*(a-e)) = H, so 1 = (F/G)*(I/H) = (F/G)/(H/I).
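
You can also verify this numerically; a quick sketch (again assuming the objective and bnds from above) that samples random points inside the bounds:

import numpy as np

rng = np.random.default_rng(0)
lo = np.array([b[0] for b in bnds])
hi = np.array([b[1] for b in bnds])

# evaluate the objective at random points within the bounds;
# every value comes out as 1 (up to floating-point error)
samples = rng.uniform(lo, hi, size=(1000, 5))
values = np.array([objective(x) for x in samples])
print(np.allclose(values, 1.0))  # True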

fuglede
    Thank you again, @fuglede! You taught me a lot via your previous comments that have now been articulated to perfection by this great summary. – naughty_waves Jul 24 '19 at 11:13