
I want to solve two simultaneous equations using the scipy.optimize.minimize function in Python, specifically with the dog-leg trust-region algorithm. This requires me to specify the Jacobian of the problem by using scipy.optimize.approx_fprime, as suggested in one solution to my other post.

My MWE is:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize,approx_fprime

def myfunc(guess,a,b,c,d):

    # initial guesses
    x0 = guess[0]
    x1 = guess[1]

    # functions
    z0 = lambda x: c*np.sqrt(a**3*x0)*np.sin(x)/x0**b
    z1 = lambda x: np.cos(b*x0*x)/x1**a

    # numerical integration with 'quad'
    z0int = quad(z0,-200,200,epsabs=1e-8,epsrel=1e-6)[0] - 3.2*d
    z1int = quad(z1,-200,200,epsabs=1e-8,epsrel=1e-6)[0] + c

    return (z0int,z1int)

# constants
a = 0.2
b = 1.1
c = 2.5
d = 0.9

guess = np.array([0.3,0.02]) # initial guesses

myJac = approx_fprime(guess,myfunc,1e-9,a,b,c,d) # Jacobian

# minimisation, want to find x0 such that z0int=0 and z1int=0
xopt = minimize(myfunc,guess,args=(a,b,c,d),method='dogleg',jac=myJac)

print(xopt)

However, I get the error TypeError: unsupported operand type(s) for -: 'tuple' and 'tuple'. I'm not really familiar with Python's optimization functions, so could you please explain what is wrong and how to correct the code?

Medulla Oblongata
  • Something doesn't look right in `myfunc()`. You define the lambda expressions `z0` and `z1` to be functions of `x`, but you do not use `x` in the values that they return. So they act like *constants*, independent of `x`. – Warren Weckesser Dec 07 '16 at 23:00
  • [`fsolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html) and [`root`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html) are the scipy functions for solving simultaneous equations. Why not try one of them? – Warren Weckesser Dec 07 '16 at 23:04
  • There is also a problem with your use of the `jac` argument. `jac` must be a *callable*--that is, a *function* `jac(x, a, b, c, d)` that returns the Jacobian at `x`. You have passed in a fixed numpy array. – Warren Weckesser Dec 07 '16 at 23:10
  • @WarrenWeckesser Thanks for the tips. I can solve my problem in Matlab using their [fsolve function](https://www.mathworks.com/help/optim/ug/fsolve.htm) but I'm trying to do this in Python (for various reasons). It looks like the dog-leg method works best, so I'm trying to find something equivalent to this algorithm that gives the same results. FYI, I'm ultimately trying to solve two simultaneous equations which are much more complicated than this MWE, so I originally tried Python's `fsolve` and `root` -- neither gave me anything similar to Matlab's `fsolve`. – Medulla Oblongata Dec 08 '16 at 05:59
  • *"This requires me to specify the Jacobian of the problem by using scipy.optimize.approx_fprime, as suggested in one solution to my other post."* That answer to your other question is not correct. `approx_fprime` does not return a callable, and it does not work with vector functions. That is, it expects the function to return a scalar. – Warren Weckesser Dec 08 '16 at 07:00

3 Answers


For minimisation, your function should return a single scalar value. You are returning a tuple, and that is the problem: the minimize function checks whether the new value is lower than the old one (hence the subtraction), but here it ends up trying to subtract tuples instead of numbers.

Change your code so that the function you wish to minimize returns a single scalar value.

EDIT according to comments

def myfunc(guess,a,b,c,d):

    # initial guesses
    x0 = guess[0]
    x1 = guess[1]

    # functions
    z0 = lambda x: c*np.sqrt(a**3*x0)/x0**b
    z1 = lambda x: np.cos(b*x0)/x1**a

    # numerical integration with 'quad'
    z0int = quad(z0,-200,200,epsabs=1e-8,epsrel=1e-6)[0] - 3.2*d
    z1int = quad(z1,-200,200,epsabs=1e-8,epsrel=1e-6)[0] + c

    return (z0int,z1int) # <-- This should return one single scalar value, not a tuple
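
One way to do that (an illustrative sketch, assuming the goal is to drive both `z0int` and `z1int` to zero; the helper names `scalar_obj` and `obj_grad` are made up for this example) is to collapse the two residuals into a single sum-of-squares value and supply a callable gradient built with `approx_fprime`. Note that `method='dogleg'` would additionally require a callable Hessian, so a quasi-Newton method is used here instead:

def scalar_obj(guess,a,b,c,d):
    # single scalar value: zero exactly when both residuals are zero
    z0int, z1int = myfunc(guess,a,b,c,d)
    return z0int**2 + z1int**2

def obj_grad(guess,a,b,c,d):
    # jac must be a callable that returns the gradient at 'guess', not a fixed array
    return approx_fprime(guess, scalar_obj, 1e-8, a, b, c, d)

xopt = minimize(scalar_obj, guess, args=(a,b,c,d), method='BFGS', jac=obj_grad)
print(xopt.x)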
Gábor Erdős
  • I'm not sure I understand how to do this. Can you show me where I should make these changes? I'm guessing they are minor changes but I just don't see how to do it. – Medulla Oblongata Dec 07 '16 at 12:56
  • I see. The problem that needs to be solved is a *simultaneous (nonlinear) equation* where I find the best values of `guess[0]` and `guess[1]`. The `myfunc` function can't return a single integer; I should have two outputs in order for the optimization function `minimize` to evaluate. – Medulla Oblongata Dec 07 '16 at 13:14
  • `scipy`'s minimization function can only evaluate a single scalar value, so either you need to change your function, or find another optimizer. – Gábor Erdős Dec 07 '16 at 13:16
  • Thanks. Do you know of any optimizers that use the dog-leg method (see original post) other than `minimize`? – Medulla Oblongata Dec 07 '16 at 13:18
  • Sadly, I don't know of any. – Gábor Erdős Dec 07 '16 at 13:29

For solving a system of equations, you are aiming to minimize the sum of squares of the left-hand sides. For this, you might be better off using least_squares instead of the more general minimize. There is an example in the documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html

Under the hood it uses a trust-region type approach.
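
For example, a rough sketch (for illustration; it reuses `myfunc`, `guess` and the constants from the question's MWE) of feeding the residuals straight to `least_squares`, here with the dogleg-type `'dogbox'` trust-region method:

from scipy.optimize import least_squares

res = least_squares(myfunc, guess, args=(a,b,c,d), method='dogbox')
print(res.x)    # candidate (x0, x1)
print(res.fun)  # residuals at the solution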

ev-br

You can rewrite the function to yield the two residuals one at a time instead of returning them as a tuple: the first residual is produced on the odd yields (1st, 3rd, ...) and the second on the even yields. You can then write a new wrapper function to minimize instead; it collects the yielded values into two lists (odd and even positions) and returns a single error metric computed from both, so that minimizing it gives the best pair of solutions.
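
A rough sketch of that idea (for illustration only; the helper names `residual_generator` and `combined_error` are made up, and the error metric is assumed to be the sum of squared residuals):

def residual_generator(guess,a,b,c,d):
    # yields the residuals one at a time instead of returning them as a tuple
    yield from myfunc(guess,a,b,c,d)

def combined_error(guess,a,b,c,d):
    # collect the yielded values, split into odd/even positions, reduce to one scalar
    vals = list(residual_generator(guess,a,b,c,d))
    odds  = vals[0::2]   # 1st, 3rd, ... yields (z0int here)
    evens = vals[1::2]   # 2nd, 4th, ... yields (z1int here)
    return sum(r**2 for r in odds + evens)   # scalar, so minimize can compare values

xopt = minimize(combined_error, guess, args=(a,b,c,d), method='Nelder-Mead')
print(xopt.x)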