4

I'm looking into a kind of bin-packing problem, but not quite the classical one. The classical problem asks to put n items into a minimum number of bins such that the total weight in each bin does not exceed its capacity.

The difference is: each item has a weight and a bound, and the capacity of a bin is dynamically determined by the minimum bound of the items in that bin.

E.g., I have four items A[11,12], B[1,10], C[3,4], D[20,22] ([weight, bound]). Now, if I put item A into a bin, call it b1, then the capacity of b1 becomes 12. Next I try to put item B into b1, but this fails: the total weight would be 11 + 1 = 12, while the capacity of b1 would become 10, which is smaller than the total weight. So B is put into a new bin b2, whose capacity becomes 10. Now I put item C into b2; this works because the total weight is 1 + 3 = 4 and the capacity of b2 becomes 4. Finally, D fits into neither b1 nor b2, so it goes into a third bin b3 with capacity 22.
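To make the rule precise, here is a tiny sketch of the feasibility check (function and variable names are my own):

```python
def fits(bin_items, new_item):
    """bin_items and new_item are (weight, bound) pairs.

    Adding an item succeeds iff the total weight stays within the
    bin's new capacity, i.e. the minimum bound over all its items.
    """
    items = bin_items + [new_item]
    capacity = min(b for _, b in items)   # capacity shrinks as items arrive
    return sum(w for w, _ in items) <= capacity

# The walk-through above:
assert not fits([(11, 12)], (1, 10))   # B fails in b1: 12 > min(12, 10)
assert fits([(1, 10)], (3, 4))         # C fits in b2: 4 <= min(10, 4)
```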

I don't know whether this problem has already been solved in some area under some name, or whether it is a variant of bin packing that has been discussed somewhere. I also don't know whether this is the right place to post the question; any help is appreciated!

Anony-mouse
  • Well, it's definitely a generalization of bin-packing: set `Bound[i] = CONST` for all `i` and you get classic bin-packing. (The reduction from bin-packing pretty much follows from the above.) – amit Jun 14 '15 at 07:46
  • I wish I hadn't seen your question. Now I can't stop thinking about this problem. Interesting! – stakx - no longer contributing Jun 14 '15 at 08:06

4 Answers

3

Usually with algorithm design for NP-hard problems, it's necessary to reuse techniques rather than whole algorithms. Here, the algorithms for standard bin packing that use branch-and-bound with column generation carry over well.

The idea is that we formulate an enormous set cover instance where the sets are the sets of items that fit into a single bin. Integer programming is a good technique for normal set cover, but there are so many sets that we need to do something else, i.e., column generation. There is a one-to-one correspondence between sets and columns, so we rip out the part of the linear programming solver that uses brute force to find a good column to enter and replace it with a solver for what turns out to be the knapsack analog of this problem.
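Concretely (in my own notation, following the standard Gilmore-Gomory style of formulation), the master problem is the LP relaxation of set cover over all feasible single-bin sets S, where a set is feasible when its total weight is at most the minimum bound of its members:

    minimize     sum[S] x_S
    subject to:  for all items j:  sum[S containing j] x_S >= 1
                 x_S >= 0

Given duals y_j from the master LP, pricing looks for a feasible set S with sum[j in S] y_j > 1, i.e., a column with negative reduced cost; that is exactly the knapsack analog described next.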

This modified knapsack problem is: given items with weights, profits, and bounds, find the most profitable set of items whose total weight is at most the minimum bound of the items in the set. The dynamic program for solving knapsack with small integer weights happily transfers over with no loss of efficiency. Just sort the items by descending bounds; then, when forming sets involving the most recent item, the weight limit is just that item's bound.
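A minimal sketch of that dynamic program in Python (the (weight, profit, bound) input format and the names are my own; in column generation the profits would be the duals from the master LP):

```python
def best_single_bin(items):
    """items: list of (weight, profit, bound) with integer weights.

    Returns the maximum profit of a set whose total weight is at
    most the minimum bound of its members (the pricing problem).
    """
    # Descending bounds: whenever an item is added, it carries the
    # smallest bound of any set formed so far.
    items = sorted(items, key=lambda it: -it[2])
    max_w = max(b for _, _, b in items)   # heavier sets never fit anywhere
    NEG = float("-inf")
    dp = [NEG] * (max_w + 1)   # dp[w] = best profit at total weight exactly w
    dp[0] = 0.0
    best = 0.0
    for w, p, b in items:
        if w <= b:   # an item heavier than its own bound never fits
            # Standard 0/1 knapsack update (reverse order: item used once).
            for c in range(max_w, w - 1, -1):
                if dp[c - w] + p > dp[c]:
                    dp[c] = dp[c - w] + p
        # Every set formed so far with total weight <= b is feasible,
        # because all of its members have bound >= b.
        best = max(best, max(dp[: b + 1]))
    return best
```

The running time is O(n · max bound), just as for ordinary knapsack.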

David Eisenstat
1

The following is based on Anony-mouse's answer. I am not an algorithms expert, so consider the following "just my two cents", for what they're worth.

I think Anony-mouse is correct in starting with the smallest items (by bound). This is because a bin tends to get smaller in capacity the more items you add to it: a bin's maximum capacity is determined by the first item placed in it, and it can never get larger after that point.

So instead of starting with a large bin and having its capacity slowly reduced, and having to worry about taking out too-large items that previously fit, let's just try to keep the bins' capacities as constant as possible. If we can keep the bins' capacities stable, we can use "standard" algorithms that know nothing about "bound".

So I'd suggest this:

  1. Group all items by bound.

    This will allow you to use a standard bin packing algorithm per group because if all items have the same bound (i.e. bound is constant), it can essentially be disregarded. All that the bound means now is that you know the resulting bins' capacity in advance.

  2. Start with the group with the smallest bound and perform a standard bin packing for its items.

    This will result in one or more bins whose capacity equals the common bound of the items in them.

  3. Proceed with the item group having the next-larger bound. See if there are any items that could still be put in an already existing bin (i.e. a bin produced by the previous steps).

    Note that bound can again be ignored; since all pre-existing bins already have a smaller capacity than these additional items' bound, the bins' capacity cannot be affected; only weight is relevant, so you can use "standard" algorithms.

    I suspect this step is an instance of the (multiple) knapsack problem, so look towards knapsack algorithms to determine how to distribute these items over and into the pre-existing, partially filled bins.

  4. It's possible that the item group from the previous step has only been partially processed; there might be items left over. These will go into one or more new bins: basically, repeat step 3.

  5. Repeat the above steps (from 3 onwards) until no more items are left. (A rough sketch of this procedure follows the list.)
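Here is a rough sketch of the whole procedure in Python (the names are my own; plain greedy first-fit stands in both for the per-group packing of step 2 and for the knapsack-style redistribution of step 3):

```python
from itertools import groupby

def pack_grouped(items):
    """items: list of (weight, bound) pairs."""
    bins = []   # each bin: {"cap": capacity, "weight": total, "items": [...]}
    items = sorted(items, key=lambda it: it[1])            # ascending bound
    for bound, group in groupby(items, key=lambda it: it[1]):
        for w, b in sorted(group, key=lambda it: -it[0]):  # heaviest first
            # Step 3: try pre-existing bins; their capacities are already
            # <= bound, so the new item's bound can be ignored here.
            for bn in bins:
                if bn["weight"] + w <= bn["cap"]:
                    bn["weight"] += w
                    bn["items"].append((w, b))
                    break
            else:
                # Step 4: open a new bin whose capacity is the group's
                # common bound.
                bins.append({"cap": bound, "weight": w, "items": [(w, b)]})
    return bins
```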

stakx - no longer contributing
  • I like these ideas a lot, both yours and Anony-mouse's. In fact, I'm now trying to find the performance bound of it. Recall that First Fit Decreasing (FFD) has an absolute approximation ratio of 3/2 compared to optimal. But it seems that for my problem the analogous idea could be arbitrarily bad (i.e., no guarantee on the approximation ratio). – Shuhao Zhang tony Jun 17 '15 at 16:32
  • @ShuhaoZhangtony Have you got a better solution? – Anony-mouse Jun 19 '15 at 13:46
  • The way to compare heuristic methods is by their runtime complexity and approximation bound. As I said, since no bound is given for your proposed solution, there's no way to compare it with any other solution. But here's my initial idea: instead of sorting items by bound, sort them by (bound - weight), i.e., by how much space is left; this essentially follows the initial idea of FFD. However, again, I failed to prove an approximation bound for it, so I can't say much about it. – Shuhao Zhang tony Jun 20 '15 at 02:30
1

It can still be written as an ILP instance, like so:

Make binary variables x_{i,j} signifying whether item j goes into bin i, helper variables y_i that signify whether bin i is used, and helper variables c_i that determine the capacity of bin i. The constants are s_j (size of item j), b_j (bound of item j), and M (a large enough constant). Now

minimize sum[i] y_i

subject to:
1:   for all j:
         (sum[i] x_{i,j}) = 1
2:   for all i,j:
         y_i ≥ x_{i,j}
3:   for all i:
         (sum[j] s_j * x_{i,j}) ≤ c_i
4:   for all i,j:
         c_i ≤ b_j + (M - M * x_{i,j})
5:   x_{i,j} ϵ {0,1}
6:   y_i ϵ {0,1}

The constraints mean

  1. any item is in exactly one bin
  2. if an item is in a bin, then that bin is used
  3. the items in a bin do not exceed the capacity of that bin
  4. the capacity of a bin is no more than the lowest bound of the items that are in it (the thing with the big M prevents items that are not in the bin from changing the capacity, provided you choose M no less than the highest bound)
  5. and 6., variables are binary.

But the integrality gap can be atrocious.
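For concreteness, here is a sketch of this model in PuLP (the function and variable names are my own; n bins always suffice, and the sketch returns the item indices per used bin):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def solve_exact(items):
    """items: list of (weight, bound) pairs."""
    n = len(items)
    s = [w for w, _ in items]
    b = [bd for _, bd in items]
    M = max(b)   # big-M no less than the highest bound
    prob = LpProblem("min_bins", LpMinimize)
    x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n)]
         for i in range(n)]
    y = [LpVariable(f"y_{i}", cat=LpBinary) for i in range(n)]
    c = [LpVariable(f"c_{i}", lowBound=0) for i in range(n)]
    prob += lpSum(y)                                             # objective
    for j in range(n):
        prob += lpSum(x[i][j] for i in range(n)) == 1            # (1)
    for i in range(n):
        prob += lpSum(s[j] * x[i][j] for j in range(n)) <= c[i]  # (3)
        for j in range(n):
            prob += y[i] >= x[i][j]                              # (2)
            prob += c[i] <= b[j] + M * (1 - x[i][j])             # (4)
    prob.solve()
    return [[j for j in range(n) if x[i][j].value() > 0.5]
            for i in range(n) if y[i].value() > 0.5]
```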

harold
  • Sure, it can be solved by integer programming solvers; in fact, those solvers are implemented with strategies such as branch and bound. But even with the best solver, a *sufficiently* large instance of an NP-hard problem cannot be solved. I am actually more interested in whether the proposed question has been studied by someone else before or not. – Shuhao Zhang tony Jun 17 '15 at 16:26
  • @ShuhaoZhangtony I hadn't heard of it before; that doesn't really mean much, I suppose. It's clearly related to "normal" bin-packing, so some results may transfer. For example, the result that the integrality gap of the "normal" version of the above model is in the worst case linear in the number of items will almost certainly carry over; at least I see no reason why it wouldn't. – harold Jun 17 '15 at 16:41
0

First of all, I might be totally wrong, and there might exist an algorithm that is even better than mine.

Bin packing is NP-hard and is efficiently solved using classic algorithms like First Fit, etc. There are some improvements to this too, e.g. Korf's algorithm.

I aim to reduce this to normal bin packing by sorting the items by their bound. The steps are:

  1. Sort items by bound: sorting the items by bound will help us in arranging the bins, as the limiting condition is the minimum bound.

  2. Insert the smallest item (by bound) into a bin.

  3. Check whether the next item (sorted by bound) can coexist in this bin. If it can, then keep the item in the bin too. If not, then try putting it in another bin, or create another bin for it.
  4. Repeat the procedure till all elements are arranged. The procedure is repeated in ascending order of bounds. (A sketch follows below.)
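A minimal sketch of this procedure in Python (the names are my own; like the above, it comes with no approximation guarantee):

```python
def first_fit_by_bound(items):
    """items: list of (weight, bound) pairs."""
    bins = []   # each bin: {"cap": capacity, "weight": total, "items": [...]}
    for w, b in sorted(items, key=lambda it: it[1]):   # ascending bound
        for bn in bins:
            # Earlier items have smaller bounds, so adding this item
            # cannot shrink an existing bin's capacity further.
            if bn["weight"] + w <= bn["cap"]:
                bn["weight"] += w
                bn["items"].append((w, b))
                break
        else:
            bins.append({"cap": b, "weight": w, "items": [(w, b)]})
    return bins
```

On the question's example this yields three bins: {C, B}, {A}, and {D}.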

I think this pretty much solves the problem. Please inform me if it doesn't; I am trying to implement the same. And if there are any suggestions or improvements, inform me of that too. :) Thank you

Anony-mouse
  • `Bin packing is NP-hard and is efficiently solved using classic algorithms like First Fit etc` wat? – amit Jun 14 '15 at 10:05
  • I was just elaborating on traditional algorithms with which we can solve bin packing – Anony-mouse Jun 14 '15 at 13:17
  • Thanks. Same comment as to stakx here: giving an approximate solution is easy; proving the approximation bound is hard. I know my original question does not ask for it, but since you guys seem really interested in it, you might want to work deeper on it. – Shuhao Zhang tony Jun 17 '15 at 16:36