2

I've been solving algorithm problems, and I'm a bit confused about the terms.

When we calculate a prefix sum (or cumulative sum) as in the code below, can we say that we are using dynamic programming?

def calc_prefix_sum(nums):
    N = len(nums)
    prefix_sum = [0] * (N + 1)
    for i in range(1, N + 1):
        # each entry re-uses the previous partial sum
        prefix_sum[i] = prefix_sum[i - 1] + nums[i - 1]
    return prefix_sum

nums = [1, 3, 0, -2, 1]
print(calc_prefix_sum(nums))
# Output: [0, 1, 4, 4, 2, 3]

According to the definition on this page:

Dynamic programming is used where we have problems, which can be divided into similar sub-problems so that their results can be re-used.

In my prefix_sum algorithm, the current calculation (prefix_sum[i]) is broken down into a similar sub-problem (prefix_sum[i - 1] + nums[i - 1]) so that the previous result (prefix_sum[i - 1]) can be re-used. So I assume that calculating a prefix sum is one application of dynamic programming.

Can I say it's dynamic programming, or should I use a different term? (In particular, I'm thinking about how to describe it in coding interviews.)

Ricky

3 Answers

4

No, the correct term is memoization, not dynamic programming. Dynamic programming requires the problem to have optimal substructure as well as overlapping subproblems. Prefix sum has optimal substructure but it does not have overlapping subproblems. Therefore, this optimization should be called memoization.
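For contrast, here is a minimal sketch (my own illustration, not part of the answer) of a problem that does have overlapping subproblems, which is the classic setting for memoization:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # without the cache, fib(n - 2) is recomputed by both fib(n) and fib(n - 1);
    # memoization stores each result so every subproblem is solved only once
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040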

Yash Gupta
1

Yes, prefix sums can be considered a form of Dynamic Programming. They are the simplest way to calculate the sum of a range over a static array: you build a prefix array in which each entry stores the running sum derived from the previous one.

Prefix Sum Array Construction Runtime = O(n)
Prefix Sum Query Runtime = O(1)
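
For example, a small sketch of my own (not part of the original answer) showing the O(1) range query on top of a prefix array built the same way as in the question:

nums = [1, 3, 0, -2, 1]
prefix_sum = [0] * (len(nums) + 1)
for i, x in enumerate(nums):
    prefix_sum[i + 1] = prefix_sum[i] + x   # same construction as in the question

def range_sum(l, r):
    # sum of nums[l..r] inclusive, answered with one subtraction
    return prefix_sum[r + 1] - prefix_sum[l]

print(range_sum(1, 3))  # 3 + 0 + (-2) == 1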

0

People often say that Kadane's algorithm is DP, and Kadane's is only 1 if statement away from a prefix sum.

from typing import List

def maxSubArray(nums: List[int]) -> int:
    # Kadane's algorithm, done in place: after the loop, nums[i] holds the
    # best subarray sum ending at index i, so the answer is the maximum entry
    for i in range(1, len(nums)):
        if nums[i - 1] > 0:
            nums[i] = nums[i - 1] + nums[i]
    return max(nums)
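
For example, on the array from the question (a quick check of my own, not part of the original answer):

print(maxSubArray([1, 3, 0, -2, 1]))  # 4, from the subarray [1, 3] (or [1, 3, 0])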

If you tried to calculate a prefix sum recursively, you would end up with an O(n^2) algorithm without memoization but an O(n) algorithm with memoization. This is because of overlapping subproblems.

nums = [1, 3, 0, -2, 1]

def cumsum(i):
    # naive recursion with no caching: every call recomputes all earlier sums
    if i < 0:
        return 0
    return nums[i] + cumsum(i - 1)

prefix_sum = [cumsum(i) for i in range(len(nums))]

We can see that cumsum(0) is evaluated 5 times: each of the 5 top-level calls in the list comprehension recurses all the way down through it before hitting the base case. Likewise, cumsum(1) is evaluated 4 times, cumsum(2) 3 times, and so on.

This is why I would say that prefix sum has both optimal substructure and overlapping subproblems.
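
For completeness, here is a sketch of the memoized version (my own addition, using functools.lru_cache rather than a hand-rolled cache), which brings the same recursion down to O(n):

from functools import lru_cache

nums = [1, 3, 0, -2, 1]

@lru_cache(maxsize=None)
def cumsum(i):
    # each cumsum(i) is computed once; later calls are served from the cache
    if i < 0:
        return 0
    return nums[i] + cumsum(i - 1)

prefix_sum = [cumsum(i) for i in range(len(nums))]
print(prefix_sum)  # [1, 4, 4, 2, 3]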

MattyC