
When I use SciPy's minimize in the example below, it finds a non-minimal solution. Can someone please help me modify this to find the true minimum?

I have four constraints: 1) the guesses in the guess array must sum to one, 2) each guess must be > 0, and 3) any guess from the initial guess array that is over the threshold must be set equal to the threshold and held constant.
Edit: 4th constraint - no guess can be greater than the threshold.

I didn't pass the last three constraints to minimize as constraints. For constraint 2, I used bounds. For constraint 3, I handle it in performMinimizeSo before calling SciPy's minimize.
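For reference, constraints 2 and 4 could also be expressed together as per-variable bounds of (0, threshold) instead of being handled by hand. A minimal sketch with toy numbers, not my real data or objective:

```python
import numpy as np
from scipy.optimize import minimize

threshold = 0.09
x0 = np.full(15, 0.06)  # toy initial guesses, all under the threshold

# Bounds of (0, threshold) cover constraints 2 and 4 in one place.
bounds = [(0.0, threshold)] * len(x0)
# Constraint 1: the guesses must sum to one.
cons = ({'type': 'eq', 'fun': lambda x: 1.0 - np.sum(x)},)

# Toy objective: stay as close as possible to the initial guesses.
res = minimize(lambda x: np.sum((x - x0) ** 2), x0, method='SLSQP',
               bounds=bounds, constraints=cons)
print(res.x)
```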

from scipy.optimize import minimize
import numpy as np
from numpy import array

def objective(initialGuesses, threshold):
    # Boolean masks split the guesses into over- and under-threshold groups
    overThreshold = initialGuesses[initialGuesses > threshold]
    underThreshold = initialGuesses[initialGuesses <= threshold]

    overThreshold[:] = threshold
    sumUnderThreshold = np.sum(underThreshold)
    oppositeSumOverTreshold = 1 - np.sum(overThreshold)

    transformedGuess = underThreshold / sumUnderThreshold * oppositeSumOverTreshold

    thresholdedResults = np.concatenate((overThreshold, transformedGuess))

    squaredError = np.square(thresholdedResults - initialGuesses) / initialGuesses

    return np.sum(squaredError)


def performMinimizeSo(initialGuesses, threshold):
    overThreshold = initialGuesses[initialGuesses > threshold]
    overThreshold[:] = threshold
    underThreshold = initialGuesses[initialGuesses <= threshold]

    # Says one minus the sum of all variables minus the sum of weights over the threshold must be zero
    cons = ({'type': 'eq', 'fun': lambda x: 1 - sum(x) - sum(overThreshold)})

    # args must be a tuple: (threshold) is just a scalar, (threshold,) is a tuple
    minimum = minimize(objective, underThreshold, args=(threshold,), method='SLSQP',
                       constraints=cons,
                       bounds=[(0, None) for i in range(len(underThreshold))],
                       )

    allGuesses = np.append(overThreshold, minimum.x)

    return allGuesses


def testCaseForSo():
    initialGuesses = array([
        [0.1],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.05],
        [0.025],
        [0.025]])

    threshold = .09
    output = (performMinimizeSo(initialGuesses, threshold))
    print(output)

testCaseForSo()

The minimal answer, as found by Excel, is this:

0.09
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.050555556
0.025277778
0.025277778

SciPy's minimize thinks this is the answer, which is close but not the correct minimum:

0.09
0.050526315
0.050526315
0.050526315
0.050526315
0.050526315
0.050526315
0.050526315
0.050526315
0.050526315
0.050526315
0.050526317
0.050526317
0.050526317
0.050526317
0.050526317
0.050526317
0.050526317
0.025526317
0.025526317

Here is what the attributes of the output look like after running scipy.minimize (as you can see, SciPy thinks it found the minimum even though we know it didn't):

minimum.success == True
minimum.status == 0
minimum.message == 'Optimization terminated successfully'
    When you use `minimize`, *always* check the `success` attribute of the object that is returned. In your code, after `minimum = minimize(...)`, add `if not minimum.success: print(minimum.status, minimum.message)`. Then include the output in the question here. – Warren Weckesser Sep 27 '18 at 00:25
  • You may want to provide gradients. With finite differences you lose half the precision. – Erwin Kalvelagen Sep 27 '18 at 13:47
  • @Warren Weckesser - thanks. Attributes have been added in the latest edit. – Curiosity Sep 27 '18 at 14:50
  • @Erwin Kalvelagen - that sounds like it might solve my problem. Do you know how to go about providing gradients? – Curiosity Sep 27 '18 at 14:59
  • Have you tried adjusting the numerical tolerances to see if that brings you sufficiently close to where you want to be? – Eelco Hoogendoorn Sep 28 '18 at 14:46
  • @EelcoHoogendoorn Yes, I did try adjusting numerical tolerance, as well as other parameters. Some of them can get me closer to the minimum, but the problem is that I need to find the minimum, not something close to the minimum. – Curiosity Sep 28 '18 at 14:57
  • What makes you believe the exact solution exists in a floating point representation? – Eelco Hoogendoorn Sep 28 '18 at 18:06
  • @EelcoHoogendoorn, you raise a good point. I assumed it would be possible, since it just seems like a very simple, straightforward optimization problem. I definitely could be wrong about that. It just seems like I should NOT be able to beat the optimizer so easily with Excel. – Curiosity Sep 28 '18 at 18:10
  • Excel might be using variable-precision arithmetic; which scipy does not support? That said numerical error does not seem like a full explanation; the problem you are solving does not contain any precision-benders, yet it is nowhere close to machine precision... dunno – Eelco Hoogendoorn Sep 28 '18 at 18:23
  • Looking closer at your problem... I don't think there is one. Your problem is simply very much underconstrained. The numbers you are getting are all < 0.09, and sum to one. There are many such numbers however, so that's reason enough to differ from some different implementation. – Eelco Hoogendoorn Sep 28 '18 at 18:46
  • @EelcoHoogendoorn So minimize just tries to satisfy the constraints? I thought minimize would.... minimize? Is there some kind of constraint I could add to reach the solution I'm seeking, or is that not really knowable? – Curiosity Sep 28 '18 at 19:17
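Following Erwin Kalvelagen's suggestion about gradients and Eelco Hoogendoorn's about tolerances: I haven't derived the gradient of my actual objective yet, but the mechanism for supplying one (and tightening SLSQP's tolerance) looks like this. A sketch with a toy quadratic objective, not my real one:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective: squared distance to a fixed target vector c.
c = np.array([0.2, 0.5, 0.1])

def f(x):
    return np.sum((x - c) ** 2)

def grad_f(x):
    # Analytic gradient of f; replaces finite-difference estimation.
    return 2.0 * (x - c)

cons = ({'type': 'eq',
         'fun': lambda x: 1.0 - np.sum(x),
         'jac': lambda x: -np.ones_like(x)},)  # constraint gradient, too

res = minimize(f, np.full(3, 1.0 / 3.0), jac=grad_f, method='SLSQP',
               constraints=cons,
               options={'ftol': 1e-12})  # tighter than the default tolerance
print(res.x)
```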
