
Start with an array A of positive numbers. Start at index 0. From index i, you can move to index i + x for any x with 1 <= x <= A[i]. The goal is to find the minimum number of moves needed to get to the end of the array.

Here's an example:

{ 2 , 4 , 1 , 2 , 3 , 2 , 4 , 2} 

If you always go as far as possible in each move, then this is what you get:

0 , 2 , 3 , 5 , 7

This takes 4 moves. But you can get through it faster by doing it this way:

0 , 1 , 4 , 7

This only takes 3 moves.

I thought about this for a bit and implemented the first idea I had, but after thinking for a few more days I still don't know how to do it any better.

Here's my idea. Start at the end of the array and keep track of the minimum number of moves from some position to the end. So for the example, moves[7] = 0 because it's the end already. Then moves[6] = 1 because it takes one move to get to the end. My formula is

moves[i] = 1 + min(moves[i+1], moves[i+2], ... , moves[i+A[i]])

By the time I get to the beginning, I know the number of moves.

So this is O(n^2), which is okay I guess, but there is probably a faster way?
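For concreteness, here is a minimal Java sketch of the quadratic DP described above (class and method names are mine, not from the question):

```java
public class JumpDP {
    // moves[i] = 1 + min(moves[i+1], ..., moves[i+A[i]]), filled right to left
    public static int minMoves(int[] a) {
        int n = a.length;
        int[] moves = new int[n];
        moves[n - 1] = 0; // already at the end
        for (int i = n - 2; i >= 0; i--) {
            moves[i] = Integer.MAX_VALUE;
            // try every jump length x in [1, a[i]]
            for (int x = 1; x <= a[i] && i + x < n; x++) {
                if (moves[i + x] != Integer.MAX_VALUE) {
                    moves[i] = Math.min(moves[i], 1 + moves[i + x]);
                }
            }
        }
        return moves[0];
    }
}
```

On the example array { 2, 4, 1, 2, 3, 2, 4, 2 } this returns 3, matching the 0, 1, 4, 7 path.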

Varun Madiath
Daniel
  • Your algorithm (which uses so-called *dynamic programming*, or the *Bellman principle*) is perfectly OK, and probably what was expected from you. Seeing it as a graph problem has the advantage that you can use existing black-box algorithms, but it isn't really better. Also, it is not really O(n^2) if e.g. the entries of the array are bounded by a number K (it becomes O(n) then). – Alexandre C. Sep 08 '11 at 20:30
  • Is this an old or particularly famous problem? The reason I ask is that, if it is, I independently came up with it about a year ago for a casual contest question. I'm not particularly interested in getting credit, but if something I originated is being used as an interview question at good companies, that would be pretty cool. Note: I understand this is probably a very old problem, and that my independent discovery of it shouldn't come as a shock given how easy it is to formulate/understand. – Patrick87 Sep 08 '11 at 20:49
  • @Patrick: it is the kind of problem you are given as an exercise when you study dynamic programming. Any half-serious book on algorithms should have hundreds of them. – Alexandre C. Sep 08 '11 at 20:51
  • @Alexandre: Right, so like I suspected, this is just a toy DP problem. Is it too trivial to have a name? – Patrick87 Sep 08 '11 at 20:52
  • So this is just a puzzle? Because to know which way to take, you already have to visit every index, so in the end you make many more moves: you visit every index, and then some of them again? – user unknown Sep 08 '11 at 20:52
  • possible duplicate of [Interview puzzle: Jump Game](http://stackoverflow.com/questions/9041853/interview-puzzle-jump-game) – Vitalii Fedorenko Aug 23 '14 at 13:22

10 Answers


Since you can choose any x in [1, A[i]], I think there is a pretty simple greedy solution:

start at 0:

from the current position i, among the reachable positions i + x (x in [1, A[i]]), jump to the j that maximizes j + A[j], i.e. the one from which the next move can go farthest

until you arrive at the end of the list.


Example:

{2 , 4 , 1 , 2 , 3 , 2 , 4 , 2}

start at 0

from 0 you can get to 1 or to 2:

  • from 1 you can reach as far as 1 + A[1] = 5
  • from 2 you can reach as far as 2 + A[2] = 3

therefore the maximum of j + A[j] over the reachable j is at j = 1

choose 1; from 1 you can get to 2, 3, 4 or 5:

  • from 2 you can reach as far as 3
  • from 3 you can reach as far as 5
  • from 4 you can reach as far as 7
  • from 5 you can reach as far as 7

therefore the maximum is at j = 4 (j = 5 ties with it)

choose 4

from 4 you can reach 7

stop

the resulting list is : 

0,1,4,7

As explained in my comments, I think it's O(N): if each new scan starts where the previous window ended, every cell is examined at most a constant number of times.


'Pseudo' proof

You start at 0 (that's optimal).

Then you select the j that maximizes j + A[j] among the positions reachable from 0 (i.e. the one that maximizes how far the next move can reach).

From that j you can reach every element that is reachable from any other element reachable from 0 (a long sentence, but it means: who can do more, can do less; therefore if j is not the optimal choice, it's at least as good as it).

So j is optimal.

Following this reasoning step by step proves the optimality of the method.

If someone knows how to formulate that more mathematically, feel free to update it.
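Here is a minimal Java sketch of this greedy, assuming positive entries as the question states (naming is mine; for clarity it re-scans the whole window each step, so it is not the strictly linear variant):

```java
import java.util.ArrayList;
import java.util.List;

public class GreedyJumps {
    // From the current position, jump to the reachable index j that
    // maximizes j + a[j] (the farthest-reaching candidate).
    // Assumes all entries are positive, as in the question.
    public static List<Integer> shortestPath(int[] a) {
        List<Integer> path = new ArrayList<>();
        path.add(0);
        int cur = 0;
        while (cur + a[cur] < a.length - 1) {
            int best = cur + 1;
            for (int j = cur + 1; j <= cur + a[cur]; j++) {
                if (j + a[j] > best + a[best]) best = j;
            }
            path.add(best);
            cur = best;
        }
        if (cur != a.length - 1) path.add(a.length - 1); // final hop
        return path;
    }
}
```

On { 2, 4, 1, 2, 3, 2, 4, 2 } this produces the path 0, 1, 4, 7.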

Ricky Bobby
  • I'm not sure I understand how you're picking the next element. You don't just go as far as you can each time. My example shows that doesn't work. – Daniel Sep 08 '11 at 20:21
  • Can you prove that this greedy solution always works? I'm skeptical that this is correct in all cases. – templatetypedef Sep 08 '11 at 20:22
  • Also, isn't this also O(n^2) in the worst case? – templatetypedef Sep 08 '11 at 20:27
  • @Daniel, I take the j that maximizes j + A[j], therefore it's not like your example. I'd prefer to choose 1 in the second step because from 1 I can reach as far as 5, but from 2 I can only reach 3. (I'm going to add an example.) – Ricky Bobby Sep 08 '11 at 20:32
  • An example would be great. Thanks! – Daniel Sep 08 '11 at 20:33
  • @templatetypedef, there aren't a lot of constraints in the problem, so I think it works in all cases, but I will try to add a proof. About the complexity: I think the worst case would be 1 on each step, and it's O(n) – Ricky Bobby Sep 08 '11 at 20:34
  • @Ricky Bobby- How can you accomplish each step in O(1)? Don't you need to iterate over (potentially) O(n) entries on each step you take? (You might be able to argue that in an amortized sense it's O(1), but I'm not sure how) – templatetypedef Sep 08 '11 at 20:35
  • I think we may have hit on the same idea, but yours is far better articulated. +1 – PengOne Sep 08 '11 at 20:40
  • @templatetypedef, at each step the number of operations depends on the farthest element you can reach from i. With my algorithm the next scan starts past that element, so the windows don't overlap; the maximum number of operations you will do is 2*N, so O(N) – Ricky Bobby Sep 08 '11 at 20:43
  • @PengOne, thanks, I give you a +1 for the support too :D. I'm having a hard time proving such a simple greedy solution is optimal, but I'm almost sure it's O(n) – Ricky Bobby Sep 08 '11 at 20:50
  • I'm not sure it is correct. Maybe we should look for a counterexample. – Alexandre C. Sep 08 '11 at 20:58
  • @Alexandre C. at each step I choose an element such that it does at least as well as choosing any other element, but I'm looking for a better proof than that sentence. – Ricky Bobby Sep 08 '11 at 21:14
  • @Alex: Knock yourself out. I believe it's correct and will look for a proof. – PengOne Sep 08 '11 at 21:14
  • I think this is great. It's not as easy for me to see that it works as mine is, but I think it's probably faster. Thanks! – Daniel Sep 08 '11 at 21:18
  • I believe you can prove this is `O(n)`. Let us say that we arrive at a node and it's value is V. Calculating this step will take V time. The total nodes remaining after this step is (n - V). The next step will take U time for value U at that node. After this, the total number of nodes remaining is (n - V - U). Because the time at each step is going to take the value of that step, and the sum of each step will be `n` (since it has to travel all n nodes to be complete) your time will also be the sum of each node. – corsiKa Sep 08 '11 at 21:53
  • Another way to look at this is... at each step, you know there will be no overlap. By starting at the end of the range, you will be forced to find a value that is better than anything in the previous range. If there is something in the previous range (that would overlap with this range) that was better than anything not in that range but in this range, then that value would have been chosen instead. – corsiKa Sep 08 '11 at 21:57
  • I think this doesn't work, actually. Let's change your example a little bit to see: if the input is A = [2, 4, 1, 2, 3, 0, 4, 2], then starting at 0 (A[0] = 2) you can get to 1 or 2 (A[1] = 4, A[2] = 1). From 1 you can get as far as 5 because A[1] = 4; from 2 you can get to 3 because A[2] = 1. So the maximum is at i = 1, but as you can see A[5] = 0, and from there we can't move ahead, so this approach doesn't work. – Ali_IT Jan 27 '14 at 17:49

Treat the array of numbers as a graph; the problem is then equivalent to the Shortest Path Problem, which can be solved in O(|E| + |V| log |V|) time using Dijkstra's algorithm.

|E| = sum of the numbers (position i has an outgoing edge for each of the A[i] positions it can jump to).

|V| = # of numbers.

Kendall Hopkins
  • But E is O(n^2) here because each number might be connected to O(n) other numbers. This might not be any better than the OP's solution. – templatetypedef Sep 08 '11 at 20:13
  • Also, you cannot transform the graph in O(N) time. For each of the O(N) nodes, there are potentially O(N) successors, giving an O(N^2) algorithm. – templatetypedef Sep 08 '11 at 20:15
  • Looking at it again, I'm pretty sure you could treat it as a graph *already*. Since all the information to "hop" nodes is already contained in the structure defined by the OP without pre-calculations. – Kendall Hopkins Sep 08 '11 at 20:27

Use your basic idea, but start from the beginning instead and you can get O(n).

The goal is to build a sequence of positions (i0, i1, ..., ik, ...) such that

  1. positions 0, 1, 2, ..., ik can be reached in k or fewer steps

  2. positions i(k-1)+1, i(k-1)+2, ..., ik cannot be reached in fewer than k steps

The base case is easy:

i0 = 0
i1 = A[0]

and the inductive part isn't too complicated:

i(k+2) = max { j + A[j] : i(k) < j <= i(k+1) }
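Assuming I've read the recurrence correctly, here is a short Java sketch (naming is mine); it scans each position at most once, because consecutive windows (i(k), i(k+1)] don't overlap:

```java
public class LayeredJumps {
    // i(k) = farthest position reachable in k moves; advance the frontier
    // by scanning only the positions first reached at the previous step.
    // Assumes positive entries, so the frontier always advances.
    public static int minMoves(int[] a) {
        int n = a.length;
        if (n <= 1) return 0;
        int steps = 1;
        int prevEdge = 0;   // i(k-1)
        int curEdge = a[0]; // i(k)
        while (curEdge < n - 1) {
            int next = curEdge;
            for (int j = prevEdge + 1; j <= curEdge && j < n; j++) {
                next = Math.max(next, j + a[j]);
            }
            prevEdge = curEdge;
            curEdge = next;
            steps++;
        }
        return steps;
    }
}
```

On the question's example this returns 3, and each array cell is visited once, giving O(n).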
PengOne
    Is this really O(n)? Doesn't the DP at each step take O(n) time in the worst-case? – templatetypedef Sep 08 '11 at 20:32
  • I'm pretty sure it is O(n) because determining i(k+2) only looks at two entries. I'll have to code it up to be sure. I'll post back a coded version and proof or retraction of the O(n) claim – PengOne Sep 08 '11 at 20:37

I'll go against the flow and tell you that your algorithm is "perfect".

It uses dynamic programming in its cleanest form, and its complexity is not so bad. In this sense, I'd say it is likely to be what was expected from you at the interview.

If you have a bound on the entries (say A[i] <= C(N)), then its complexity is O(N * min(C(N), N)). For instance, if all the entries are less than a constant K, it is O(N).

Using Dijkstra's algorithm (or more generally reducing the problem to a shortest path problem) is smart, but I rank it behind the clean DP solution, since graph algorithms are complex (and it could backfire at an interview if you were asked about them).

Note that Dijkstra would be O(N C(N) + N log N) instead (N vertices, and N C(N) edges). So depending on C, you are either strictly better or equal in complexity.

Alexandre C.

You could formulate it as a graph problem (really, what problem can't be?). Let the positions in the array be the vertices, and add an edge from each vertex to every position it can jump to. In your example, vertex 0 would have edges to 1 and 2, while vertex 1 would have edges to 2, 3, 4 and 5.

There are several efficient graph search algorithms. For instance, Dijkstra's runs in O(|E| + |V| log |V|), and A* can do better if you can come up with a good heuristic.

carlpett
  • But then aren't there n^2 edges? So my answer is just as good as Djikstra's, right? I don't know what A* and h* are. – Daniel Sep 08 '11 at 20:18
  • Yes, you are probably as good as Dijkstra's. [A*](http://en.wikipedia.org/wiki/A*_search_algorithm), however, is directed by a heuristic (the `h` in my answer) and will search the most promising edges first, so it will probably beat your algorithm. For a graph with `n^2` edges, for instance, it will find the solution in one step (since in that case there is an edge directly to the end) – carlpett Sep 08 '11 at 20:25
  • @Daniel: A* is another shortest-path graph algorithm, where you can use a heuristic of your own. – Alexandre C. Sep 08 '11 at 20:32

My method: create an array reqSteps that stores, for each index, the number of moves it takes to escape the array. Start from the end of the array. If input[i] can escape the array by itself, store 1 in reqSteps[i]; otherwise store 1 plus the minimum over the reachable successor entries. The result is reqSteps[0].

The greedy method above does not work for the input { 10, 3, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1 }: it gives 9 as the answer, but the correct answer is 2.

public static void arrayHop()
{
    int[] input = { 10, 3, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1 };
    int answer = calcArrayHop(input, input.length);
    System.out.println(answer); // prints 2 for this input
}

public static int calcArrayHop(int[] input, int length) {
    int[] reqSteps = new int[length];

    for (int i = length - 1; i >= 0; i--) {
        if (i + input[i] >= length) {
            // one hop is enough to escape the array
            reqSteps[i] = 1;
        } else {
            // 1 + the cheapest escape among the positions we can jump to
            int minSteps = Integer.MAX_VALUE;
            for (int j = i + 1; j <= i + input[i]; j++) {
                if (reqSteps[j] < minSteps)
                    minSteps = reqSteps[j];
            }
            reqSteps[i] = minSteps + 1;
        }
    }

    return reqSteps[0];
}


Dynamic programming solution:

Keep track, for each element, of the smallest number of steps needed to get there, and of where you came from. Then simply walk through the array, and for each element update the next positions available from it (from i+1 up to i+a[i]).

{ 2 , 4 , 1 , 2 , 3 , 2 , 4 , 2} 
  0

{ 2 , 4 , 1 , 2 , 3 , 2 , 4 , 2} 
  0   1   1 (num of steps)
      0   0 (source)
  ^         (current position)
{ 2 , 4 , 1 , 2 , 3 , 2 , 4 , 2} 
  0   1   1   2   2   2
      0   0   1   1   1
      ^
{ 2 , 4 , 1 , 2 , 3 , 2 , 4 , 2} 
  0   1   1   2   2   2
          ^
etc...

This is O(n + sum(a[i])), or a bit less, since you don't have to go beyond the boundary of the array.
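A short Java sketch of this forward relaxation (naming is mine), recording both the step counts and the source of each position:

```java
import java.util.Arrays;

public class ForwardDP {
    // Walk left to right; each reached position relaxes every position
    // it can jump to, recording the step count and where it came from.
    public static int minMoves(int[] a) {
        int n = a.length;
        int[] steps = new int[n];
        int[] source = new int[n]; // predecessor, to recover the path
        Arrays.fill(steps, Integer.MAX_VALUE);
        steps[0] = 0;
        source[0] = -1;
        for (int i = 0; i < n; i++) {
            if (steps[i] == Integer.MAX_VALUE) continue; // unreachable
            for (int j = i + 1; j <= i + a[i] && j < n; j++) {
                if (steps[i] + 1 < steps[j]) {
                    steps[j] = steps[i] + 1;
                    source[j] = i;
                }
            }
        }
        return steps[n - 1];
    }
}
```

Following source[] backwards from the last index recovers the actual path, just as in the tables above.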

Karoly Horvath

You can convert the array into a graph and find the shortest path. Here is how the transformation from array to graph works.

Each array element is a node, and based on the value of the element, an edge is drawn from that node to each of the indices (nodes) it can jump to. Once we have this graph we can find the shortest path, which can be better than O(n^2).

https://i.stack.imgur.com/bRUYD.png

grdvnl
  • @Daniel: Thanks for pointing out. I am new to SO, didn't expect to see all answers posted around same time. – grdvnl Sep 08 '11 at 21:51

My naive approach: going from the start, do a breadth-first search through all paths (the child nodes of i are i+1 .. i+A[i]), saving the found paths to some array, and then pick the shortest one. Of course all indexes i+x > length(A) are discarded. Its upper bound is O(n * min(n, max(A[i=0..n])) + n), so it should be less than quadratic in practice.
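A sketch of this breadth-first idea in Java (naming is mine); instead of storing whole paths it just returns the level at which the last index first appears, which gives the same move count:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class BfsJumps {
    // Level-order search: the level at which the last index is first
    // dequeued equals the minimum number of moves.
    public static int minMoves(int[] a) {
        int n = a.length;
        boolean[] seen = new boolean[n];
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(0);
        seen[0] = true;
        int level = 0;
        while (!queue.isEmpty()) {
            int size = queue.size();
            for (int k = 0; k < size; k++) {
                int i = queue.poll();
                if (i == n - 1) return level;
                for (int j = i + 1; j <= i + a[i] && j < n; j++) {
                    if (!seen[j]) {
                        seen[j] = true;
                        queue.add(j);
                    }
                }
            }
            level++;
        }
        return -1; // unreachable; cannot happen with positive entries
    }
}
```

Marking nodes as seen keeps each index in the queue at most once, which avoids the exponential blow-up of enumerating all paths.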

Rostislav Matl

Here's a slight modification of Ricky Bobby's answer, which I'll show to be optimal:

find_shortest_path(A):
    path := [0]
    last := 0
    max_reachable := 0

    while A[last] + last < length(A) - 1:
        next_hop := the x with max_reachable < x <= A[last] + last
                    that maximizes x + A[x]
        push(path, next_hop)
        max_reachable := A[last] + last
        last := next_hop

    if last != length(A) - 1:
        push(path, length(A) - 1)
    return path

proof of correctness: I'll use induction on the nodes of the path created by my algorithm.

The property I'll show is P(i): the ith node of my path has a 'reach' no smaller than that of the ith node of any optimal path,

where reach is defined as the highest index you can hop to from that node, or +infinity if you can get past the end of the array.

P(0) is obvious.

assume that P(k) is true for some k >= 0

now consider the (k + 1)th node in the path created by my algorithm. Since my algorithm chose node k so that it had at least the same reach as the optimal path's node k, the set of nodes that can be the (k + 1)th node of my path is a superset of the corresponding set for any optimal path. Since my algorithm chooses the node with the greatest reach, it follows that P(k + 1) is true.

by induction, P(k) is true for all k (up to the size of the path created).

since my algorithm will end as soon as the end of the array is in reach, and this will happen no later than for any optimal path, it follows that the path created by my algorithm is optimal.

as for the running time: each cell of the array is considered at most once, so it's O(n), which is asymptotically optimal. I don't think it's possible to design an algorithm which checks fewer cells in every case.
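For completeness, a hypothetical Java rendering of the pseudocode above (naming is mine, and 'the end' is taken to be the last index, as in the question's example); the max_reachable cursor is what keeps any cell from being scanned twice:

```java
import java.util.ArrayList;
import java.util.List;

public class LinearGreedy {
    // Each scan starts just past max_reachable, so the scan windows never
    // overlap and the total work over all iterations is O(n).
    // Assumes positive entries, as in the question.
    public static List<Integer> findShortestPath(int[] a) {
        List<Integer> path = new ArrayList<>();
        path.add(0);
        int last = 0;
        int maxReachable = 0;
        while (a[last] + last < a.length - 1) {
            int next = maxReachable + 1;
            for (int j = maxReachable + 1; j <= a[last] + last; j++) {
                if (j + a[j] > next + a[next]) next = j; // greatest reach
            }
            maxReachable = a[last] + last;
            path.add(next);
            last = next;
        }
        if (last != a.length - 1) path.add(a.length - 1); // final hop
        return path;
    }
}
```

On { 2, 4, 1, 2, 3, 2, 4, 2 } this produces 0, 1, 4, 7, matching the greedy walkthrough above.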

Bwmat