
cvxpy has a very neat way to write out an optimisation problem without worrying too much about converting it into a "standard" matrix form, as that conversion is done internally. Best to explain with an example:

import cvxpy as cp
import numpy as np
from scipy.optimize import minimize, LinearConstraint


def cvxpy_implementation():

    var1 = cp.Variable()
    var2 = cp.Variable()

    constraints = [
            var1 <= 3,
            var2 >= 2
    ]

    obj_fun = cp.Minimize(var1**2 + var2**2)

    problem = cp.Problem(obj_fun, constraints)
    problem.solve()

    return var1.value, var2.value


def scipy_implementation1():

    A = np.eye(2)                    # identity: each row bounds one variable
    lb = np.array([-np.inf, 2])      # no lower bound on x[0]; x[1] >= 2
    ub = np.array([3, np.inf])       # x[0] <= 3; no upper bound on x[1]

    con = LinearConstraint(A, lb, ub)

    def obj_fun(x):
        return (x**2).sum()
    result = minimize(obj_fun, [0, 0], constraints=con)
    return result.x

def scipy_implementation2():

    # 'ineq' constraints require fun(x) >= 0
    con = [
        {'type': 'ineq', 'fun': lambda x: 3 - x[0]},   # x[0] <= 3
        {'type': 'ineq', 'fun': lambda x: x[1] - 2},   # x[1] >= 2
    ]

    def obj_fun(x):
        return (x**2).sum()

    result = minimize(obj_fun, [0, 0], constraints=con)
    return result.x

All of the above give the correct result, but the cvxpy implementation is much easier to write out. Specifically, I don't have to worry about the direction of the inequalities, and I can give the variables useful names when writing out the constraints. Compare that to the scipy1 and scipy2 implementations: in the first case I have to write out these extra infs, and in the second I have to remember which index corresponds to which variable. You can imagine a case where I have 100 variables; while concatenating them will ultimately need to be done, I'd like to be able to write the problem out the way cvxpy lets me.
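
For illustration, here is the kind of thing that stays readable in cvxpy even with many variables (the variable name and numbers below are made up just for this sketch):

import cvxpy as cp
import numpy as np

# 100 decision variables under one meaningful name
weights = cp.Variable(100)
constraints = [weights >= 0, cp.sum(weights) == 1]
obj_fun = cp.Minimize(cp.sum_squares(weights - np.ones(100) / 100))
cp.Problem(obj_fun, constraints).solve()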

Question: Has anyone implemented this for scipy? Or is there an alternative library that could make this work?

Thank you

  • There is nothing like that for scipy. You might be able to find similar software when targeting some specific class of optimization problem (pulp, pyomo). But first you need to understand that those examples above are not comparable! Boiled down, this is convexity enforced by construction with convexity-assuming solvers (the core use case of cvxpy) versus general nonlinear programming assuming only twice differentiability (scipy minimize). Focused on Python, you might get somewhat jealous seeing [JuMP.jl](https://github.com/JuliaOpt/JuMP.jl) – sascha Nov 11 '19 at 13:07
  • yeah you're right, this is neat, though I still think there is a bit of superfluous syntax, but otherwise it looks very clean – evan54 Nov 12 '19 at 05:10

1 Answer


Wrote something up that does this and seems to cover the main issues I had in mind.

The general idea is that you define variables, build up an expression the way you would normally write it out, and then the solver class optimises over the defined variables.

https://github.com/evan54/optimisation/blob/master/var.py

The example below illustrates a simple use case:

import numpy as np
from var import Variable, Problem  # assuming both classes live in var.py linked above

# fake data
a = 2
m = 3
x = np.linspace(0, 10)
y = a * x + m + np.random.randn(len(x))

# fit y_ = a_ * x + m_ to the data by least squares
a_ = Variable()
m_ = Variable()
y_ = a_ * x + m_
error = y_ - y

prob = Problem((error**2).sum(), None)
prob.minimize()
print(f'a = {a}, a_ = {a_}')
print(f'm = {m}, m_ = {m_}')
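
The design choice this rests on (much like cvxpy's) is that Variable, and the expressions built from it, overload the arithmetic operators (+, -, *, **) and expose a sum method, so a_ * x + m_ is itself an object that records the computation; Problem can then evaluate that expression and minimise it over the defined variables.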