
I am using PuLP to create an allocator function that packs items into trucks based on weight and volume. It works fine (takes 10-15 sec) for 10-15 items, but when I double the number of items it takes more than half an hour to solve.

import pulp
from pulp import LpProblem, LpMinimize, LpInteger, lpSum

def allocator(item_mass, item_vol, truck_mass, truck_vol, truck_cost, id_series):
    n_items = len(item_vol)
    set_items = range(n_items)
    n_trucks = len(truck_cost)
    set_trucks = range(n_trucks)

    print("working1")

    y = pulp.LpVariable.dicts('truckUsed', set_trucks,
        lowBound=0, upBound=1, cat=LpInteger)

    x = pulp.LpVariable.dicts('itemInTruck', (set_items, set_trucks), 
        lowBound=0, upBound=1, cat=LpInteger)
    print("working2")

    # Model formulation
    prob = LpProblem("Truck allocation problem", LpMinimize)

    # Objective
    prob += lpSum([truck_cost[i] * y[i] for i in set_trucks])
    print("working3")
    # Constraints
    for j in set_items:
        # Every item must be taken in one truck
        prob += lpSum([x[j][i] for i in set_trucks]) == 1

    for i in set_trucks:
        # Respect the mass constraint of trucks
        prob += lpSum([item_mass[j] * x[j][i] for j in set_items]) <= truck_mass[i]*y[i]

        # Respect the volume constraint of trucks
        prob += lpSum([item_vol[j] * x[j][i] for j in set_items]) <= truck_vol[i]*y[i]
    print("working4")
    # Linking constraints: an item can only go into a truck that is used.
    # (Without "prob +=", the comparison is built but never added to the model.)
    for j in set_items:
        for i in set_trucks:
            prob += x[j][i] <= y[i]
    print("working5")

    s = id_series  # item ids (currently unused in the model)

    prob.solve()

    print("working6")
    print("Status:", pulp.LpStatus[prob.status])

This is the data I am running it on:

items:

   Name  Pid  Quantity  Length  Width  Height  Volume  Weight     t_type 
0     A    1         1    4.60   4.30     4.3   85.05    1500       Open   
1     B    2         1    4.60   4.30     4.3   85.05    1500       Open   
2     C    3         1    6.00   5.60     9.0  302.40   10000  Container   
3     D    4         1    8.75   5.60     6.6  441.00    1000       Open   
4     E    5         1    6.00   5.16     6.6  204.33    3800       Open   
5     C    6         1    6.00   5.60     9.0  302.40   10000        All   
6     C    7         1    6.00   5.60     9.0  302.40   10000  Container   
7     D    8         1    8.75   5.60     6.6  441.00    6000       Open   
8     E    9         1    6.00   5.16     6.6  204.33    3800       Open   
9     C   10         1    6.00   5.60     9.0  302.40   10000        All   
.... times 5

trucks (these are just the top 5 rows; I have 54 types of trucks in total):

  Category       Name  TruckID  Length(ft)  Breadth(ft)  Height(ft)   Volume  Weight  Price
0      LCV  Tempo 407        0         9.5          5.5         5.5  287.375    1500      1
1      LCV  Tempo 407        1         9.5          5.5         5.5  287.375    2000      1
2      LCV  Tempo 407        2         9.5          5.5         5.5  287.375    2500      2
3      LCV    13 Feet        3        13.0          5.5         7.0  500.500    3500      3
4      LCV    14 Feet        4        14.0          6.0         6.0  504.000    4000      3

where ItemId is this:

data["ItemId"] = data.index + 1
id_series = data["ItemId"].tolist()
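To get a feel for the scale, the formulation has one binary `y` per truck plus one binary `x` per (item, truck) pair, and the assignment, capacity, and linking constraints each add rows. A quick back-of-the-envelope count (a plain-Python sketch, assuming the 54 truck types mentioned above):

```python
# Rough size of the MIP as a function of item count, assuming 54 trucks.
# Binaries: one y per truck plus one x per (item, truck) pair.
# Rows: one assignment per item, mass + volume per truck, one linking per pair.
def model_size(n_items, n_trucks=54):
    n_vars = n_trucks + n_items * n_trucks
    n_rows = n_items + 2 * n_trucks + n_items * n_trucks
    return n_vars, n_rows

print(model_size(15))  # 15 items
print(model_size(30))  # 30 items: the model roughly doubles, but the
                       # branch-and-bound search tree can grow far faster
```

The model itself only doubles in size, but MIP solve time is not linear in model size, which is why the jump from 15 to 30 items is so dramatic.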
Rahul Sharma
  • Can I advise you to look into the subject of 'O-notation'? It describes things like this: having two nested for-loops is O(n^2), meaning that with each new element the work grows quadratically, which probably explains why it takes so long – bv_Martn May 06 '19 at 07:20
  • I will look into it, thanks – Rahul Sharma May 06 '19 at 07:25
  • PuLP supports different MIP solvers. Try a different (better) MIP solver. The difference in performance between MIP solvers can be dramatic. – Erwin Kalvelagen May 06 '19 at 08:41
  • @ErwinKalvelagen different MIP solvers, such as ? Could you please name a few I should look into – Rahul Sharma May 06 '19 at 09:09
  • Try CPLEX and Gurobi, if you have access to licenses for them. (I think both offer academic licenses for their `pulp` solvers, if that helps.) Also, I will note that it is common for solve times for IPs to increase quickly as the problem size increases. It doesn't always happen as dramatically as what you are describing, and one should certainly look for ways to improve it (better solvers, tighter formulations, etc.), but to some extent this is unavoidable. – LarrySnyder610 May 06 '19 at 14:39
  • Also, this link can be useful: https://stackoverflow.com/questions/51084738/what-is-the-run-time-complexity-of-integer-linear-programming-ilp – kur ag May 11 '19 at 06:29

1 Answer


PuLP can handle multiple solvers. See which ones you have with:

pulp.pulpTestAll()

This will give a list like:

Solver pulp.solvers.PULP_CBC_CMD unavailable.
Solver pulp.solvers.CPLEX_DLL unavailable.
Solver pulp.solvers.CPLEX_CMD unavailable.
Solver pulp.solvers.CPLEX_PY unavailable.
    Testing zero subtraction
    Testing continuous LP solution
    Testing maximize continuous LP solution
    ...
* Solver pulp.solvers.COIN_CMD passed.
Solver pulp.solvers.COINMP_DLL unavailable.
    Testing zero subtraction
    Testing continuous LP solution
    Testing maximize continuous LP solution
    ...
* Solver pulp.solvers.GLPK_CMD passed.
Solver pulp.solvers.XPRESS unavailable.
Solver pulp.solvers.GUROBI unavailable.
Solver pulp.solvers.GUROBI_CMD unavailable.
Solver pulp.solvers.PYGLPK unavailable.
Solver pulp.solvers.YAPOSIB unavailable.

You can then solve using, e.g.:

lp_prob.solve(pulp.COIN_CMD())

Gurobi and CPLEX are commercial solvers that tend to work quite well; perhaps you can get access to them? Gurobi offers a good academic license.
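Even with the free CBC solver, capping the run time or the optimality gap often helps a lot on models like this one, since you accept a provably near-optimal answer instead of waiting for the exact optimum. A minimal sketch, assuming a recent PuLP where `PULP_CBC_CMD` accepts `timeLimit` and `gapRel` (older versions used `maxSeconds` and `fracGap`):

```python
import pulp

# A tiny toy model, just to show how solver options are passed.
prob = pulp.LpProblem("demo", pulp.LpMaximize)
x = pulp.LpVariable("x", cat=pulp.LpBinary)
y = pulp.LpVariable("y", cat=pulp.LpBinary)
prob += 3 * x + 2 * y   # objective
prob += x + y <= 1      # capacity

# Stop after 60 s, or once within 5% of the optimum, whichever comes first.
solver = pulp.PULP_CBC_CMD(msg=False, timeLimit=60, gapRel=0.05)
prob.solve(solver)
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

On a real instance the solver then returns the best feasible packing found within those limits, which for bin-packing-style models is frequently optimal or very close to it.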

Alternatively, you may wish to look into an approximate solution, depending on your solution-quality requirements.
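For instance, a greedy first-fit-decreasing heuristic gives a feasible packing almost instantly and can serve as a sanity check (or upper bound) for the MIP. A sketch under simplified assumptions (hypothetical helper, not part of the question's code): items are `(mass, volume)` pairs, trucks are `(mass_cap, vol_cap, cost)` triples, and we try large items first in the cheapest trucks first:

```python
def first_fit_decreasing(items, trucks):
    """items: list of (mass, vol); trucks: list of (mass_cap, vol_cap, cost).
    Returns (assignment, total_cost); assignment[i] is a truck index or None."""
    # Largest items first, cheapest trucks first.
    item_order = sorted(range(len(items)), key=lambda i: -max(items[i]))
    truck_order = sorted(range(len(trucks)), key=lambda t: trucks[t][2])
    used_mass = [0.0] * len(trucks)
    used_vol = [0.0] * len(trucks)
    opened = set()
    assignment = [None] * len(items)
    for i in item_order:
        m, v = items[i]
        for t in truck_order:
            cap_m, cap_v, _ = trucks[t]
            if used_mass[t] + m <= cap_m and used_vol[t] + v <= cap_v:
                used_mass[t] += m
                used_vol[t] += v
                opened.add(t)
                assignment[i] = t
                break
    cost = sum(trucks[t][2] for t in opened)
    return assignment, cost

# Toy data loosely shaped like the question's tables.
items = [(1500, 85.05), (1500, 85.05), (1000, 441.0)]
trucks = [(1500, 287.375, 1), (2000, 287.375, 1), (3500, 500.5, 3)]
assignment, cost = first_fit_decreasing(items, trucks)
print(assignment, cost)
```

This gives no optimality guarantee, but it runs in milliseconds even for hundreds of items, and its cost can also be passed to the exact model as a cutoff to prune the search.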

Richard