
I have a question about a variant of the coin change problem where we not only have to print the number of ways to change $n with the given coin denominations, e.g. {1, 5, 10, 25}, but also print the ways themselves.

For example, if the target = $50 and the coins are {1, 5, 10, 25}, then the ways to use the coins to reach the target are

  • 2 × $25
  • 1 × $25 + 2 × $10 + 1 × $5
  • etc.

What is the best time complexity we could get for this problem? I tried to modify the dynamic programming solution for the standard coin change problem, where we only need the number of ways but not the actual ways.

I am having trouble figuring out the time complexity. I do use memoization so that I don't have to solve the same subproblem again for a given coin and sum value, but we still need to iterate through all the solutions and print them. So the time complexity is definitely more than O(ns), where n is the number of coins and s is the target. Is it exponential? Any help will be much appreciated.
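For reference, the counting-only DP the question refers to runs in O(ns) time. A minimal sketch (the function name is mine):

```python
def count_ways(coins, target):
    # ways[s] = number of combinations that sum to s using the coins seen so far
    ways = [1] + [0] * target
    for coin in coins:
        # iterate sums upward so each coin may be reused any number of times
        for s in range(coin, target + 1):
            ways[s] += ways[s - coin]
    return ways[target]

print(count_ways([1, 5, 10, 25], 50))  # -> 49
```

This counts combinations (order-insensitive), matching the example in the question; the difficulty discussed below is that printing the combinations, rather than counting them, blows past this bound.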

ruakh
user12331
  • When I type "Coin Problem complexity" into my favorite search engine, I get a result that contains a possible dynamic programming solution along with information about the time complexity of the problem. Printing your result should typically be within the same upper complexity bound as finding the result. So if you find the result within O(ns), then you should be able to print it within O(ns), unless you chose a print representation that is more complex than the actual result. So you have to specify the output format before we talk about output complexity. – grek40 Aug 21 '17 at 10:59
  • You are looking for a proof why it is exponential? – maraca Aug 24 '17 at 18:26
  • 1
    @grek40: Re: "Printing your result should typically be within the same upper complexity bound as finding the result": Sure, but the complexity of finding and printing *k* different values will frequently be *much* higher than the complexity of simply computing *k*. – ruakh Aug 24 '17 at 19:02
  • `ways to change` *`$n`*, *`n`* `is the number of coins`, *`s`* `is the target` I wish *n* didn't change meaning at all, but: is that *number of different denominations*, *number of coins* (available), *per denomination*, or something else, entirely? – greybeard Aug 24 '17 at 20:38
  • @user12331 I changed the algorithm in my answer. I hope that it makes the reasoning easier to follow. – Rahav Aug 25 '17 at 06:16

3 Answers


Printing Combinations

def coin_change_solutions(coins, S):
  # create an (S+1) x (N+1) table for memoization
  N = len(coins)
  sols = [[[] for n in range(N + 1)] for s in range(S + 1)]
  # base case: one way to make sum 0 -- the empty combination
  for n in range(N + 1):
    sols[0][n].append([])

  # fill table using bottom-up dynamic programming
  for s in range(1, S + 1):
    for n in range(1, N + 1):
      without_last = sols[s][n - 1]
      if coins[n - 1] <= s:
        with_last = [list(sol) + [coins[n - 1]] for sol in sols[s - coins[n - 1]][n]]
      else:
        with_last = []
      sols[s][n] = without_last + with_last

  return sols[S][N]


print(coin_change_solutions([1, 2], 4))
# => [[1, 1, 1, 1], [1, 1, 2], [2, 2]]
  • without: we don't need to use the last coin to make the sum. All such coin combinations are found directly by recursing to sols[s][n - 1]. We take all those combinations as without_last.

  • with: we do need to use the last coin, so that coin must be in our solution. The remaining coins are found recursively via sols[s - coins[n - 1]][n]. Reading this entry gives us all the possible choices for what the remaining coins should be. For each possible choice sol, we append the last coin, coins[n - 1]:

    # For example, suppose target is s = 4
    # We're finding solutions that use the last coin.
    # Suppose the last coin has a value of 2:
    #
    # find possible combinations that add up to 4 - 2 = 2: 
    # ===> [[1,1], [2]] 
    # then for each combination, add the last coin 
    # so that each combination adds up to 4
    # ===> [[1,1,2], [2,2]]
    

The final list of combinations is found by taking the combinations for the first case and the second case and concatenating the two lists.

without_last_sols = [[1,1,1,1]]
with_last_sols = [[1,1,2], [2,2]]
without_last_sols + with_last_sols = [[1,1,1,1], [1,1,2], [2,2]]

Time Complexity

In the worst case we have a coin set with all values from 1 to n: coins = [1, 2, 3, ..., n]. The number of possible coin combinations, num_solutions, is then the number of integer partitions of s, p(s). The partition function grows super-polynomially — asymptotically p(s) ~ e^(π·√(2s/3)) / (4s·√3) — so p(s) = O(2^s) but exceeds any polynomial in s.
Hence num_solutions = p(s), and any algorithm must spend at least this much time just printing out all the possible solutions. The output alone rules out any polynomial bound like O(ns); in that sense the problem is exponential in nature.
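To see this growth concretely, here is a quick check that counts the partitions of s with the same bottom-up DP (a sketch; the function name is mine):

```python
def partitions(s):
    # p[t] = number of partitions of t using parts 1..s
    p = [1] + [0] * s
    for part in range(1, s + 1):
        for t in range(part, s + 1):
            p[t] += p[t - part]
    return p[s]

for s in [10, 20, 50, 100]:
    print(s, partitions(s))
# 10 42
# 20 627
# 50 204226
# 100 190569292
```

Already at s = 100 there are over 190 million combinations to print, so the output size dwarfs the O(ns) table-filling work.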

We have two loops: one loop for s and the other loop for n.
For each s and n, we compute sols[s][n]:

  • without: We look at the O(2^s) combinations in sols[s][n - 1]. Each combination is carried over into the concatenated result at O(n) cost per combination. So overall this case takes O(n × 2^s) time.
  • with: We look at the O(2^s) combinations in sols[s - coins[n - 1]][n]. For each combination list sol, we create a copy of that list in O(n) time and then append the last coin. Overall this case also takes O(n × 2^s).

Hence the time complexity is O(s × n) × O(n·2^s + n·2^s) = O(s × n² × 2^s).


Space Complexity

The space complexity is O(s × n² × 2^s), because we have an s × n table, each entry storing O(2^s) possible combinations (e.g. [[1, 1, 1, 1], [1, 1, 2], [2, 2]]), with each combination (e.g. [1, 1, 1, 1]) taking O(n) space.

James Lawson
  • This is not the most optimal solution. Furthermore, you are also using Python instructions to build the array over and over. While this is one Python instruction, internally it iterates, which adds to the time complexity. – Rahav Aug 26 '17 at 14:00

Let d_i be a denomination, i.e. the value of a coin in cents; in your example the denominations are {1, 5, 10, 25}. Let k be the number of denominations, here k = 4.

We will use a 2D array numberOfCoins[1..k][0..n] to determine the minimum number of coins required to make change. The recurrence for an entry is:

numberOfCoins[i][j] = min(numberOfCoins[i − 1][j], numberOfCoins[i][j − d_i] + 1)

and the optimal solution is given by numberOfCoins[k][n].

The equation above represents the fact that to build an optimal solution we either do not use d_i, so we need to use smaller coins (this is why i is decremented below):

numberOfCoins[i][j] = numberOfCoins[i − 1][j]   // eq1

or we use d_i, so we add 1 to the number of coins needed and decrement j by d_i (the value of the coin we just used):

numberOfCoins[i][j] = numberOfCoins[i][j − d_i] + 1   // eq2

The time complexity is O(kn), but in cases where k is a small constant, as in your example, we have O(4n) = O(n).

We will use another 2D array, coinUsed, with the same dimensions as numberOfCoins, to mark which coins were used. Each entry either tells us that we did not use the coin, by setting a "^" at coinUsed[i][j] (this corresponds to eq1), or marks that the coin was used, by setting a "<" at that position (corresponding to eq2).

Both arrays can be built as the algorithm runs. This adds only a constant number of instructions to the inner loop, so the time complexity of building both arrays is still O(kn).

To print the solution we need to iterate, in the worst case, over k + n + 1 entries — for example, when the optimal solution uses only 1-cent coins. Since printing is done after building, the overall time complexity is O(kn) + O(k + n + 1) = O(kn). As before, if k is a constant the complexity reduces to O(n).
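The two tables and the walk-back can be sketched like this (in Python for brevity; the array names mirror the answer, everything else is my own scaffolding):

```python
def min_coins(d, n):
    # numberOfCoins[i][j]: fewest coins needed for amount j
    # using only the first i denominations d[0..i-1]
    INF = float('inf')
    k = len(d)
    numberOfCoins = [[0] + [INF] * n for _ in range(k + 1)]
    coinUsed = [[''] * (n + 1) for _ in range(k + 1)]

    for i in range(1, k + 1):
        for j in range(1, n + 1):
            # eq1: do not use denomination d[i-1]
            numberOfCoins[i][j] = numberOfCoins[i - 1][j]
            coinUsed[i][j] = '^'
            # eq2: use denomination d[i-1] if that improves the count
            if d[i - 1] <= j and numberOfCoins[i][j - d[i - 1]] + 1 < numberOfCoins[i][j]:
                numberOfCoins[i][j] = numberOfCoins[i][j - d[i - 1]] + 1
                coinUsed[i][j] = '<'

    # follow the marks back from [k][n] to recover one optimal combination
    used, i, j = [], k, n
    while j > 0:
        if coinUsed[i][j] == '<':
            used.append(d[i - 1])
            j -= d[i - 1]
        else:
            i -= 1
    return numberOfCoins[k][n], used

print(min_coins([1, 5, 10, 25], 40))  # -> (3, [25, 10, 5])
```

Note that the walk-back recovers a single optimal combination, not all combinations.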

Rahav
  • I fixed a mistake in the last paragraph. – Rahav Aug 25 '17 at 11:04
  • I don't understand how this finds *all* the combinations. From what I can tell, this will only find *one* combination. At the end, once you've built `coinUsed`, you will start at the bottom right and follow each < and ^ until you reach the top left. But this will only give *one* combination, right? Can you give an example of the coinUsed table or give some pseudocode? – James Lawson Aug 26 '17 at 14:11

What I tend to do is solve the problem recursively and then build a memoization solution from there.

Starting with the recursive version, the approach is simple: at each coin index, either pick the coin and subtract it from the target, or skip it.

When you pick a coin you push it onto a vector (your running list); before exploring the skip branch you pop the coin you just added. The code looks something like:

#include <iostream>
#include <vector>
using namespace std;

void print(vector<int>& coinsUsed)
{
    for(auto c : coinsUsed)
    {
        cout << c << ",";
    }
    cout << endl;
}

int helper(vector<int>& coins, int target, int index, vector<int>& coinsUsed)
{
    if (index >= coins.size() || target < 0) return 0;

    if (target == 0)
    {
        print(coinsUsed);
        return 1;
    }

    coinsUsed.push_back(coins[index]);
    int with = helper(coins, target - coins[index], index, coinsUsed);

    coinsUsed.pop_back();
    int without = helper(coins, target, index + 1, coinsUsed);

    return with + without;
}

int coinChange(vector<int>& coins, int target)
{
    vector<int> coinsUsed;
    return helper(coins, target, 0, coinsUsed);
}

You can call it like:

vector<int> coins = {1,5,10,25};
cout << "Total Ways:" << coinChange(coins, 10);

So this gives you the total number of ways, and the coins used to reach the target are printed from coinsUsed along the way. You can now memoize this as you please by caching results keyed on the passed-in (target, index) values.
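As a sketch of that memoization step (in Python for brevity; the with/without split mirrors the helper above, and the function names are mine) — note the cache only pays off if you return counts rather than print inside the recursion, since the exponential output otherwise dominates:

```python
from functools import lru_cache

def coin_change_count(coins, target):
    @lru_cache(maxsize=None)
    def helper(remaining, index):
        if remaining == 0:
            return 1  # one valid way found
        if index >= len(coins) or remaining < 0:
            return 0  # dead end
        # "with": reuse coins[index]; "without": move past it
        return (helper(remaining - coins[index], index)
                + helper(remaining, index + 1))
    return helper(target, 0)

print(coin_change_count([1, 5, 10, 25], 10))  # -> 4
```

With the cache, counting runs in O(ns) states; the printing variant stays exponential because of the output size.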

The time complexity of the recursive solution is exponential.

link to the running program: http://coliru.stacked-crooked.com/a/5ef0ed76b7a496fe

Samer Tufail
  • This is [backtracking](https://en.wikipedia.org/wiki/Backtracking) not dynamic programming. As you pointed out, there is no memoization (no cache) – so you'll be solving the same subproblems over and over again (this problem *does* have overlapping subproblems). On top of the exponential number of solutions to print out in the leaves of your backtracking tree – you have to deal with an exponential number of recursive calls to reach these leaf nodes. – James Lawson Aug 26 '17 at 10:54
  • @James Lawson I understand that well, and I did mention it can be memoized to reach an optimal solution. Starting off directly with DP solutions is generally not intuitive unless you can skillfully define subproblems. People tend to memorize solutions and then fail when presented with variants of the problem. – Samer Tufail Aug 26 '17 at 11:38