
I'm working on a simple optimization problem involving cooking recipes. The peculiarity is that the system will have to optimize several recipes at the same time, because of not-yet-implemented constraints that the recipes will share (e.g. a core common to all the recipes).

Below is the error that scipy shows:

         fun: 3.467601477010358
     jac: array([1.79999998, 2.04999998, 1.79999998, 2.04999998])
 message: 'Singular matrix C in LSQ subproblem'
    nfev: 6
     nit: 1
    njev: 1
  status: 6
 success: False
       x: array([0.99425684, 0.3895346 , 0.31931526, 0.14859794])

Here is the code:

import sys

import numpy as np
from scipy.optimize import minimize

ingredient = {
    0: {'description': 'Corn', 'price': 180},
    1: {'description': 'Rice', 'price': 205},
}

ingredient_count = len(ingredient)

product = {
    0: {'description': 'Bread 1',
        'ingredient': {0: {'min': 0, 'max': 36},
                       1: {'min': 0, 'max': 100}}},
    1: {'description': 'Bread 2',
        'ingredient': {0: {'min': 0, 'max': 36},
                       1: {'min': 0, 'max': 100}}},
}

product_count = len(product)


def function(x):
    totals = list()
    for product_index in product:
        total = 0
        for ingredient_index in product[product_index]['ingredient']:
            x_index = (ingredient_count * product_index) + ingredient_index
            total += (x[x_index] * ingredient[ingredient_index]['price'] / 100)
        totals.append(total)
    return totals


def function_sum(x):
    # p1 + p2 + ... + pn
    totals = function(x)
    return sum(totals)


def function_diff_sum(x):
    # p_min + (p1 - p_min) + (p2 - p_min) + ... + (pn - p_min)
    totals = function(x)
    min_total = min(totals)
    grand_total = min_total
    for total in totals:
        grand_total += total - min_total
    return grand_total

# Constraints


constraints = list()


def populate_constraints():
    for product_index in product:
        total_of_ingredients(product_index)
        for ingredient_index in product[product_index]['ingredient']:
            for constraint_type in product[product_index]['ingredient'][ingredient_index]:
                if constraint_type == 'min':
                    min_for_ingredient(product_index, ingredient_index)
                elif constraint_type == 'max':
                    max_for_ingredient(product_index, ingredient_index)


def min_for_ingredient(product_index, ingredient_index):
    x_index = (ingredient_count*product_index)+ingredient_index
    constraints.append({'type': 'ineq', 'fun': lambda x: x[x_index] - product[product_index]['ingredient'][ingredient_index]['min']})


def max_for_ingredient(product_index, ingredient_index):
    x_index = (ingredient_count*product_index) + ingredient_index
    constraints.append({'type': 'ineq', 'fun': lambda x: product[product_index]['ingredient'][ingredient_index]['max'] - x[x_index]})


def total_of_ingredients(product_index):
    # The total of ingredients for a product has to be 100.
    first_x_index = ingredient_count*product_index
    constraints.append({'type': 'eq', 'fun': lambda x: (sum(x[i] for i in range(first_x_index, ingredient_count)))-100})


def main():
    x0 = np.random.rand(1, product_count*ingredient_count)
    # x0 = np.array([36, 64, 36, 64])  # The solution that the optimizer should give
    populate_constraints()

    # There are 2 types of sum
    fun = function_sum
    # fun = function_diff_sum

    res = minimize(fun, x0, constraints=constraints)
    print(res)


if __name__ == '__main__':
    sys.exit(main())
  • If you are trying to solve some linear-optimization problem using general nonlinear-optimization solvers (I'm too lazy to check the code out in detail, but it looks like that): don't. There is scipy's linprog. – sascha Dec 07 '19 at 13:51

1 Answer
If I'm reading your question correctly, you are trying to solve the linear programming problem of minimizing

1.8*i_11 + 2.05*i_12 + 1.8*i_21 + 2.05*i_22

with the constraints

i_11 + i_12 = 100
i_21 + i_22 = 100
0 <= i_11 <= 36
0 <= i_12 <= 100
0 <= i_21 <= 36
0 <= i_22 <= 100

Ordering the variables as (i_11, i_12, i_21, i_22) and referring to its documentation for the details on the notation, this is straightforward with scipy.optimize.linprog:

from scipy.optimize import linprog

c = [1.8, 2.05, 1.8, 2.05]
A_eq = [[1, 1, 0, 0], [0, 0, 1, 1]]
b_eq = [100, 100]
bounds = [(0, 36), (0, 100), (0, 36), (0, 100)]
output = linprog(c=c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

Then, output is the following OptimizeResult:

     con: array([3.13126236e-09, 3.13126236e-09])
     fun: 391.9999999946266
 message: 'Optimization terminated successfully.'
     nit: 5
   slack: array([], dtype=float64)
  status: 0
 success: True
       x: array([35.99999999, 64.00000001, 35.99999999, 64.00000001])
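As a side note (this is a sketch, not part of the original answer): the hard-coded arrays above can also be built programmatically from data shaped like the question's product/ingredient dictionaries, which scales to more recipes and ingredients. The variable ordering assumed here matches the question's x_index = ingredient_count * product_index + ingredient_index layout:

```python
import numpy as np
from scipy.optimize import linprog

# Same data as in the question: ingredient prices and per-product min/max bounds.
prices = [180, 205]  # Corn, Rice
products = [
    {"bounds": [(0, 36), (0, 100)]},  # Bread 1
    {"bounds": [(0, 36), (0, 100)]},  # Bread 2
]

n_ing = len(prices)
n_prod = len(products)
n_vars = n_prod * n_ing

# Objective: cost per percentage point of each ingredient, repeated per product.
c = [p / 100 for p in prices] * n_prod

# One equality row per product: its ingredient percentages must sum to 100.
A_eq = np.zeros((n_prod, n_vars))
for j in range(n_prod):
    A_eq[j, j * n_ing:(j + 1) * n_ing] = 1
b_eq = [100] * n_prod

# Per-variable bounds replace the individual min/max inequality constraints.
bounds = [b for prod in products for b in prod["bounds"]]

res = linprog(c=c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # ≈ [36, 64, 36, 64]
```

Adding a constraint shared by all recipes (the "common core" from the question) then amounts to appending one more row spanning all variables to A_eq or A_ub.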