I'm trying to move an optimization job that I usually do in Excel to a Python script. Basically I have an array of numbers (deals), some combination of which adds up to a target value (target). The way I find this combination in Excel is with a second, binary array: I take the SUMPRODUCT of the two arrays to get a dummy cell, my objective is the difference between the target and the dummy, and I run Solver to drive that objective to zero by changing the binary array. Below is the code I have tried so far, along with some snippets of the Excel sheet I run Solver on.
Really appreciate any help with this; I've watched countless videos on SciPy and I'm pretty stuck.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

deals = np.array([11359992.5, 45892294.4, 10963487.39, 54817436.94, 43853949.55,
                  39352270.93, 51792041.32, 51809259.82, 25913243.16,
                  51671721.32, 50836797.35])
target = 102508518.67

# Keep every selector between 0 and 1 (the Excel "binary" column).
constraint = LinearConstraint(np.eye(len(deals)), lb=0, ub=1)

# Solver can target "value of 0"; minimize can't, so I minimize the squared gap
# between the target and the SUMPRODUCT-style dummy value instead.
def objective_function(x):
    dummy = np.dot(deals, x)
    return (target - dummy) ** 2

x0 = np.zeros(len(deals))
res = minimize(fun=objective_function, x0=x0, constraints=constraint)
print(res.x, res.fun)
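Something I also came across while reading is scipy.optimize.milp, which (from SciPy 1.9 on, as far as I can tell) supports integer variables directly, so it might be closer to what Solver's binary constraint does than minimize. Below is a sketch of how I think the same setup would look as a mixed-integer program; the extra variable t is something I added myself to stand in for the absolute gap between the target and the selected deals, so treat it as an assumption rather than the "right" formulation.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

deals = np.array([11359992.5, 45892294.4, 10963487.39, 54817436.94, 43853949.55,
                  39352270.93, 51792041.32, 51809259.82, 25913243.16,
                  51671721.32, 50836797.35])
target = 102508518.67
n = len(deals)

# Decision vector: n binary selectors x followed by one continuous slack t
# that bounds the absolute deviation |target - deals @ x|.
c = np.zeros(n + 1)
c[-1] = 1.0  # minimize t

# Two rows encode  deals @ x - t <= target  and  deals @ x + t >= target,
# which together force t >= |target - deals @ x|.
A = np.vstack([np.append(deals, -1.0), np.append(deals, 1.0)])
constraints = LinearConstraint(A, lb=[-np.inf, target], ub=[target, np.inf])

integrality = np.append(np.ones(n), 0)  # x integer, t continuous
bounds = Bounds(lb=np.append(np.zeros(n), 0.0),
                ub=np.append(np.ones(n), np.inf))  # 0/1 box makes x binary

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
if res.success:
    print(res.x[:n].round(), res.fun)  # selection vector and leftover gap
If that runs the way I expect, res.x[:n].round() would be the Python equivalent of the binary column Solver fills in, but I'd appreciate confirmation that this is a sensible way to set it up.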
[Screenshot: Excel sheet before running Solver]
[Screenshot: Excel sheet after running Solver]