Printing Combinations
def coin_change_solutions(coins, S):
    # create an (S + 1) x (N + 1) table for memoization
    N = len(coins)
    sols = [[[] for n in range(N + 1)] for s in range(S + 1)]
    # base case: the empty combination is the only way to make a sum of 0
    for n in range(N + 1):
        sols[0][n].append([])
    # fill the table using bottom-up dynamic programming
    for s in range(1, S + 1):
        for n in range(1, N + 1):
            # solutions that don't use the last coin, coins[n - 1]
            without_last = sols[s][n - 1]
            if coins[n - 1] <= s:
                # solutions that use the last coin at least once
                with_last = [list(sol) + [coins[n - 1]] for sol in sols[s - coins[n - 1]][n]]
            else:
                with_last = []
            sols[s][n] = without_last + with_last
    return sols[S][N]

print(coin_change_solutions([1, 2], 4))
# => [[1, 1, 1, 1], [1, 1, 2], [2, 2]]
without: we don't need to use the last coin to make the sum. All of those solutions are already stored in sols[s][n - 1], so we simply take them as without_last.

with: we do need to use the last coin, so that coin must appear in the solution. The remaining coins are found via sols[s - coins[n - 1]][n]. Reading this entry gives us every possible choice for the remaining coins; for each choice sol, we copy it and append the last coin, coins[n - 1]:
# For example, suppose the target sum is s = 4
# and we're finding the solutions that use the last coin.
# Suppose the last coin has a value of 2:
#
# find the combinations that add up to 4 - 2 = 2:
# ===> [[1,1], [2]]
# then append the last coin to each combination
# (so that each combination adds up to 4):
# ===> [[1,1,2], [2,2]]
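In code, this step is just the list comprehension from the function above applied to this example (remaining is our name here for the table entry sols[4 - 2][2]):

remaining = [[1, 1], [2]]                           # sols[4 - 2][2]: combinations summing to 2
with_last = [list(sol) + [2] for sol in remaining]  # append the last coin, 2
print(with_last)
# => [[1, 1, 2], [2, 2]]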
The final list of combinations is found by concatenating the lists from the two cases:

without_last = [[1, 1, 1, 1]]
with_last = [[1, 1, 2], [2, 2]]
without_last + with_last
# => [[1, 1, 1, 1], [1, 1, 2], [2, 2]]
Time Complexity
In the worst case the coin set contains every value from 1 to n: coins = [1, 2, 3, 4, ..., n]. The number of possible coin combinations, num_solutions, is then the number of integer partitions of s, p(s).

It can be shown that p(s) grows exponentially in √s; a crude upper bound is num_solutions = p(s) = O(2^s). Any algorithm must take at least p(s) steps simply to print out all of these solutions, so the running time is necessarily super-polynomial in s.
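As a quick sanity check (this just reuses coin_change_solutions from above; stopping at s = 7 is arbitrary), counting the solutions for coins = [1, 2, ..., s] reproduces the partition numbers p(s):

for s in range(1, 8):
    coins = list(range(1, s + 1))
    print(s, len(coin_change_solutions(coins, s)))
# => 1 1, 2 2, 3 3, 4 5, 5 7, 6 11, 7 15  (i.e. p(s) = 1, 2, 3, 5, 7, 11, 15)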
We have two loops: one over s and one over n. For each s and n we compute sols[s][n]:

- without: we take the O(2^s) combinations already stored in sols[s][n - 1]. They are reused by reference, so concatenating them into sols[s][n] takes O(2^s) time.
- with: we look at the O(2^s) combinations in sols[s - coins[n - 1]][n]. For each combination sol, we copy it in O(n) time and then append the last coin, coins[n - 1]. Overall this case takes O(n×2^s) time.

Hence the time complexity is O(s×n)×O(2^s + n×2^s) = O(s×n^2×2^s).
Space Complexity
The space complexity is O(s×n^2×2^s): we have an s×n table in which each entry stores O(2^s) possible combinations (e.g. [[1, 1, 1, 1], [1, 1, 2], [2, 2]]), and each combination (e.g. [1, 1, 1, 1]) takes O(n) space.
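To see this growth concretely, here is a rough, hypothetical sketch (the helpers coin_change_table and stored_coins are ours, not part of the solution above) that re-runs the same DP but keeps the whole table and counts every stored coin value for the worst-case coin set. Because inner lists are shared between entries it over-counts real memory, but it matches the entry-by-entry accounting used here:

def coin_change_table(coins, S):
    # same DP as above, but return the whole table instead of just sols[S][N]
    N = len(coins)
    sols = [[[] for n in range(N + 1)] for s in range(S + 1)]
    for n in range(N + 1):
        sols[0][n].append([])
    for s in range(1, S + 1):
        for n in range(1, N + 1):
            without_last = sols[s][n - 1]
            if coins[n - 1] <= s:
                with_last = [list(sol) + [coins[n - 1]] for sol in sols[s - coins[n - 1]][n]]
            else:
                with_last = []
            sols[s][n] = without_last + with_last
    return sols

def stored_coins(sols):
    # total number of coin values held across every table entry
    return sum(len(combo) for row in sols for entry in row for combo in entry)

for s in range(2, 8):
    print(s, stored_coins(coin_change_table(list(range(1, s + 1)), s)))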