Initialize:
    max_so_far = 0
    max_ending_here = 0

Loop for each element of the array:
    (a) max_ending_here = max_ending_here + a[i]
    (b) if (max_ending_here < 0)
            max_ending_here = 0
    (c) if (max_so_far < max_ending_here)
            max_so_far = max_ending_here

return max_so_far
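
For reference, here is roughly the same thing in Python (my own rough translation of the pseudocode above; the function name is arbitrary and a is assumed to be a list of integers):

def max_subarray_sum(a):
    # Direct translation of the pseudocode above;
    # note it returns 0 for an array with no positive elements.
    max_so_far = 0
    max_ending_here = 0
    for x in a:
        max_ending_here = max_ending_here + x
        if max_ending_here < 0:
            max_ending_here = 0
        if max_so_far < max_ending_here:
            max_so_far = max_ending_here
    return max_so_far

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # prints 6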

Can anyone help me understand the optimal substructure and overlapping subproblems (the bread and butter of DP) in the above algorithm?

Bhavish Agarwal
    Kadane's algorithm is greedy, IIRC. – nhahtdh May 01 '13 at 18:25
    +1, I've been struggling with this myself. I can't decide if it counts as DP or not: we have optimal substructure, but no overlapping subproblems. I've seen it labeled as DP however, but strictly speaking, I'd say it isn't. – IVlad May 01 '13 at 18:26
  • Can't imagine someone has the same question as I have ;) – Eric Z Oct 17 '15 at 03:03
  • "Kadane's algorithm is greedy?". That is too far from my understanding. A hallmark of greedy algorithm is that at the end of algorithm, the actual solution, which in the current case is the subarray that attains the maximum sum should have been computed explicitly, since, "the choice made by a greedy algorithm may depend on choices made so far, but not on future choices or all the solutions to the subproblem", quoted from [Wikipedia article on greedy algorithm](https://en.wikipedia.org/wiki/Greedy_algorithm). – burnabyRails Oct 22 '20 at 20:45
  • But that's the thing, Kadane's algorithm does not depend on all the solutions to the subproblems. It picks the local optimum at every step. – IVlad Oct 22 '20 at 23:04
  • @IVlad, the final answer given in Kadane's algorithm does depend on all the solutions to the subproblems, where each subproblem is to find the maximum sum of an array that ends at a particular index. The final answer is the maximum of all answers to the subproblems. – burnabyRails Oct 23 '20 at 03:24
  • @burnabyRails that does not make an algorithm a DP algorithm. The same can be said for all the greedy algorithms as well: Dijkstra's depends in the end on all the solutions to the subproblems too. If you extend the definition this much, it will apply to any greedy algorithm too, and it loses its purpose. – IVlad Oct 26 '20 at 20:59
  • @IVlad, I do not think "the same can be said for all the greedy algorithms". Are you able to show how you can classify the classical greedy algorithm, Kruskal's algorithm as DP? – burnabyRails Oct 26 '20 at 21:40
  • @burnabyRails I don't know if I am, but since you can implement anything recursively and then apply similar logic to what I used for Dijkstra's, I'd wager that it's possible. Anyway, will it really make a difference in the discussion we're having if I change "all the greedy [...]" to "other greedy [...]"? – IVlad Oct 26 '20 at 22:45
  • @burnabyRails consider exponentiation by squaring where you usually do something like `t = pow(x, n/2); return t*t;`. If I instead do `return pow(x,n/2)*pow(x,n/2)`, is this DP or just me being silly for not storing the return value of the recursive call? – IVlad Oct 26 '20 at 22:53
  • @IVlad, if you would like to ask a question, please do. – burnabyRails Oct 27 '20 at 00:25

2 Answers


According to this definition of overlapping subproblems, the recursive formulation of Kadane's algorithm (f[i] = max(f[i - 1] + a[i], a[i])) does not exhibit this property. Each subproblem would only be computed once in a naive recursive implementation.
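
To make that concrete, here is a rough Python sketch of the recurrence (the function name and layout are my own, chosen for illustration):

# f(a, i) implements the recurrence f[i] = max(f[i - 1] + a[i], a[i]):
# the maximum sum of a subarray that ends exactly at index i.
def f(a, i):
    if i == 0:
        return a[0]  # a subarray ending at index 0 can only be [a[0]]
    return max(f(a, i - 1) + a[i], a[i])

# Evaluating f(a, len(a) - 1) produces a single chain of calls
# f(n - 1) -> f(n - 2) -> ... -> f(0); each subproblem is computed
# exactly once, even without memoization.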

It does however exhibit optimal substructure according to its definition here: we use the solution to smaller subproblems in order to find the solution to our given problem (f[i] uses f[i - 1]).

Consider the dynamic programming definition here:

In mathematics, computer science, and economics, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure (described below). When applicable, the method takes far less time than naive methods that don't take advantage of the subproblem overlap (like depth-first search).

The idea behind dynamic programming is quite simple. In general, to solve a given problem, we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Often when using a more naive method, many of the subproblems are generated and solved many times. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations.

This leaves room for interpretation as to whether or not Kadane's algorithm can be considered a DP algorithm: it does solve the problem by breaking it down into easier subproblems, but its core recursive approach does not generate overlapping subproblems, which is what DP is meant to handle efficiently - so this would put it outside DP's specialty.

On the other hand, you could say that it is not necessary for the basic recursive approach to lead to overlapping subproblems, but that would make any recursive algorithm a DP algorithm, which would give DP a much too broad scope in my opinion. I am not aware of anything in the literature that definitively settles this, however, so I wouldn't mark down a student or dismiss a book or article for labeling it either way.

So I would say that it is not a DP algorithm, just a greedy and / or recursive one, depending on the implementation. I would label it as greedy from an algorithmic point of view for the reasons listed above, but objectively I would consider other interpretations just as valid.

IVlad
    It is also interesting that it only involves two elements of storage. This again makes it feel less like a typical DP algorithm. Do you know of any other algorithms that are thought of as DP with so little storage required? – Peter de Rivaz May 01 '13 at 20:58
    @PeterdeRivaz - the fibonacci recurrence would count: it has optimal substructure and overlapping subproblems and can also be implemented with `O(1)` memory. – IVlad May 01 '13 at 21:56
  • Please check [Google search for "Kadane's algorithm dynamic programming"](https://www.google.com/search?q=kadane%27s+algorithm+dynamic+programming). – burnabyRails Oct 22 '20 at 21:21
    I'm not sure what your point is. I've given references to literature definitions, do you have any published scientific or teaching material that contradicts what I said? Other people saying different things is not really a counter argument as long as they don't address my arguments specifically. – IVlad Oct 22 '20 at 23:01

Note that I derived my explanation from this answer. It demonstrates how Kadane’s algorithm can be seen as a DP algorithm which has overlapping subproblems.

Identifying subproblems and recurrence relations

Imagine we have an array a from which we want to get the maximum subarray. To determine the maximum subarray that ends at index i, the following recurrence relation holds:

max_subarray_to(i) = max(max_subarray_to(i - 1) + a[i], a[i])

In order to get the maximum subarray of a we need to compute max_subarray_to() for each index i in a and then take the max() of those results:

max_subarray = max( for i=1 to n max_subarray_to(i) )
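
Written out naively in Python (a sketch with function names of my own choosing, 0-based indexing, and no memoization), this formulation looks roughly like:

def max_subarray_to(a, i):
    # Maximum sum of a subarray of a that ends exactly at index i.
    if i == 0:
        return a[0]
    return max(max_subarray_to(a, i - 1) + a[i], a[i])

def max_subarray(a):
    # The answer is the best "ends at i" value over every index i.
    return max(max_subarray_to(a, i) for i in range(len(a)))

Each call max_subarray_to(a, i) made from max_subarray recomputes max_subarray_to(a, i - 1), ..., max_subarray_to(a, 0), which is exactly the overlap worked through in the example below.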

Example

Now, let's assume we have an array [10, -12, 11, 9] from which we want to get the maximum subarray. This would be the work required if we evaluated the recursion above directly, without memoization:

result = max(max_subarray_to(0), max_subarray_to(1), max_subarray_to(2), max_subarray_to(3))

max_subarray_to(0) = 10  # base case
max_subarray_to(1) = max(max_subarray_to(0) + (-12), -12)
max_subarray_to(2) = max(max_subarray_to(1) + 11, 11)
max_subarray_to(3) = max(max_subarray_to(2) + 9, 9)

As you can see, max_subarray_to() is evaluated more than once for every i apart from the last index 3, thus showing that Kadane's algorithm does have overlapping subproblems.

Kadane's algorithm is usually implemented using a bottom-up DP approach to take advantage of the overlapping subproblems and to compute each subproblem only once, hence making it O(n).
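
As a rough sketch (my own Python, with names loosely matching the recurrence above), the bottom-up version only keeps the previous subproblem's value and the running maximum:

def max_subarray(a):
    max_to = a[0]   # max_subarray_to(i) for the current index i
    best = a[0]     # best max_subarray_to(j) seen so far
    for i in range(1, len(a)):
        # Recurrence step: extend the previous subarray or start fresh at i.
        max_to = max(max_to + a[i], a[i])
        best = max(best, max_to)
    return best

print(max_subarray([10, -12, 11, 9]))  # prints 20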

Andi
  • Dijkstra's algorithm can be written recursively too, if you really want to (see for example: http://thoughtoverflow.com/dsa/dijkstra-shortest-path-recursive.html - I'm sure someone more skilled than me can come up with a recursive formula too in less time than I'm willing to spend on it). Now consider finding the longest path out of all shortest paths starting from a source node: it would involve taking the max of dijkstra(node_1), dijkstra(node_2) etc. If you write it like you did, we would have overlapping subproblems here too. So would this be DP? – IVlad Oct 26 '20 at 21:09
  • If you do it that way you can force any problem to be DP, but to me it seems against the spirit of the DP definition to do as much as possible in a single function just to force recursive calls to generate overlapping subproblems. Consider for example f(i) = f(i-1) + 1, g(i) = max(g(i-1), f(i)). Is this DP? f(i) computes f(i-1) and so does g(i-1). It fits the definition, yes. But it's a mess, and it's your fault for abusing recursion like that. f itself is not DP, and you might as well treat computing f as a separate problem. – IVlad Oct 26 '20 at 21:21
  • @IVlad, [the wikipedia article on Dijkstra's algorithm](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) classifies Dijkstra's algorithm as "Dynamic programming". Of course, Kadane's algorithm is widely regarded as dynamic programming (as well as greedy algorithm). – burnabyRails Oct 26 '20 at 21:40
  • @burnabyRails that classification is marked as controversial even on wikipedia. CLRS (first citation on the wiki page) classifies it as greedy. – IVlad Oct 26 '20 at 22:35
    @IVlad you make some very good points in regards to forcing a problem to be DP to retrieve overlapping subproblems. I understand why this could be argued for my answer here, and why your answer therefore seems more correct. Thanks for the feedback. – Andi Oct 27 '20 at 00:04