I have a recursive DFS algorithm that correctly counts the number of subsets that sum to a target value. However, its run time is exponential and quickly becomes absurd. For example, suppose arr contains the set below and the sum we are looking for is 50. All duplicates and all numbers greater than or equal to 50 have been removed from arr, and the array is then sorted.
21 3 42 10 13 17 33 26 19 7 11 30 24 2 5
arr contains the list of numbers in sorted order.
k is the initial size of the array.
sum is the target sum we are looking for in the subsets; in this example it is 50.
public static void recDfs(ArrayList<Integer> arr, int k, int sum) {
    if (sum == 0) {
        // found a subset that sums to the target
        counter++;
        return;
    }
    if (k == 0) {
        // no elements left and the target was not reached
        // (the sum != 0 check is redundant here, since sum == 0 already returned)
        return;
    }
    // branch 1: include arr[k - 1] in the subset
    recDfs(arr, k - 1, sum - arr.get(k - 1));
    // branch 2: exclude arr[k - 1]
    recDfs(arr, k - 1, sum);
}
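For reference, here is a minimal self-contained driver for the method on the first set above (a sketch of my setup; the class name and the static counter field are not shown in the snippet, so their exact form here is an assumption):

```java
import java.util.ArrayList;
import java.util.Arrays;

public class SubsetSumCount {
    // incremented each time a subset summing to the target is found
    static int counter = 0;

    public static void recDfs(ArrayList<Integer> arr, int k, int sum) {
        if (sum == 0) {
            counter++;
            return;
        }
        if (k == 0) {
            return;
        }
        recDfs(arr, k - 1, sum - arr.get(k - 1)); // include arr[k - 1]
        recDfs(arr, k - 1, sum);                  // exclude arr[k - 1]
    }

    public static void main(String[] args) {
        ArrayList<Integer> arr = new ArrayList<>(Arrays.asList(
                21, 3, 42, 10, 13, 17, 33, 26, 19, 7, 11, 30, 24, 2, 5));
        counter = 0;
        recDfs(arr, arr.size(), 50);
        System.out.println(counter); // the run below reports 51 for this input
    }
}
```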
This gives the correct result extremely quickly; the output is below.
Time elapsed: 0.004838
There are 51 subsets that sum to 50
BUILD SUCCESSFUL (total time: 0 seconds)
However, the run time grows exponentially when the array holds a larger set such as the one below.
99 49 1 7 23 83 72 6 202 78 26 79 351 34 107 76 38 50 32 62 71 9 101 77 81 92 89 66 97 57 33 75 68 93 100 28 42 59 29 14 122 24 60 2 37 192 73 84 31 4 87 65 19
When we call recDfs again with this new array (also sorted, with duplicates removed) and the sum 107, the run time is absurd, although the correct number of subsets is printed.
Time elapsed: 19853.771050
There are 1845 subsets that sum to 107
BUILD SUCCESSFUL (total time: 330 minutes 54 seconds)
I am looking for better ways to implement this algorithm.
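One standard improvement I have been considering is memoizing on the state (k, sum): the count reachable from a given (k, sum) pair does not depend on how the recursion arrived there, so an n-by-sum table replaces the 2^n call tree, and since every element is positive, branches where the remaining sum goes negative can be pruned outright. A sketch under those assumptions (the class name, the `count` wrapper, and the `long` return type are my own, not from the code above):

```java
import java.util.ArrayList;
import java.util.Arrays;

public class SubsetSumMemo {
    // memo[k][sum] caches the number of subsets of the first k elements
    // that sum to exactly sum; -1 marks an uncomputed entry
    static long[][] memo;

    static long countSubsets(ArrayList<Integer> arr, int k, int sum) {
        if (sum == 0) return 1;          // the empty remainder completes a subset
        if (sum < 0 || k == 0) return 0; // overshot the target, or no elements left
        if (memo[k][sum] != -1) return memo[k][sum];
        long with = countSubsets(arr, k - 1, sum - arr.get(k - 1)); // include arr[k - 1]
        long without = countSubsets(arr, k - 1, sum);               // exclude arr[k - 1]
        return memo[k][sum] = with + without;
    }

    public static long count(ArrayList<Integer> arr, int target) {
        memo = new long[arr.size() + 1][target + 1];
        for (long[] row : memo) Arrays.fill(row, -1);
        return countSubsets(arr, arr.size(), target);
    }

    public static void main(String[] args) {
        ArrayList<Integer> small = new ArrayList<>(Arrays.asList(
                21, 3, 42, 10, 13, 17, 33, 26, 19, 7, 11, 30, 24, 2, 5));
        System.out.println(count(small, 50)); // the run above reports 51 for this input
    }
}
```

On the 53-element set this touches at most 53 * 107 states instead of up to 2^53 paths, which is why the same count should come back in well under a second.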