5

An array is given such that its values increase from index 0 through some index (k-1). At index k the value is the minimum, and then it starts increasing again through the nth element. Find the minimum element.

Essentially, it's one sorted list appended to another; example: (1, 2, 3, 4, 0, 1, 2, 3).

I have tried all sorts of algorithms, like building a min-heap, quickselect, or just plain traversal, but I can't get it below O(n). There is a pattern in this array, though, something that suggests a binary-search kind of approach should be possible, with complexity around O(log n), but I can't find it. Thoughts?

Thanks

derobert
ocwirk
  • Do you mean it **decreases** from 0 to K? – Tom Zych Sep 28 '11 at 18:37
  • No, at k it could drop to any smaller value and then start increasing again. It's like we have placed two sorted arrays one after another in a list and we need to find the merging point. – ocwirk Sep 28 '11 at 18:39
  • I've edited the question to hopefully clarify, considering I misunderstood (and apparently wasn't the only one). @JimMischel gets credit for the clear explanation. – derobert Sep 28 '11 at 20:13
  • Are the values necessarily increasing by one, or may they increase by any value? – Ed Staub Sep 28 '11 at 20:36

4 Answers

4

No. The drop can be anywhere; there is no structure to exploit.

Consider the extremes:

1234567890
9012345678
1234056789
1357024689

It reduces to finding the minimum element.

Captain Giraffe
1

Do a breadth-wise binary search for a decreasing range, with a one-element overlap at the binary splits. In other words, if you had, say, 17 elements, compare elements

0,8
8,16
0,4
4,8
8,12
12,16
0,2
2,4

etc., looking for a case where the left element is greater than the right.

Once you find such a range, recurse, doing the same binary search within that range. Repeat until you've found the decreasing adjacent pair.

The average complexity is not less than O(log n), with a worst-case of O(n). Can anyone get a tighter average-complexity estimate? It seems roughly "halfway between" O(log n) and O(n), but I don't see how to evaluate it. It also depends on any additional constraints on the ranges of values and size of increment from one member to the next.

If the increment between elements is always 1, there's an O(log n) solution.
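The breadth-wise search described above can be sketched as follows. This is a minimal Python sketch of my reading of the answer (the function name and queue-based traversal are my own choices, not from the answer): ranges are visited breadth-first with a one-element overlap at each split, and once a range with left endpoint greater than right endpoint is found, a plain binary search inside it locates the adjacent decreasing pair.

```python
from collections import deque

def find_drop(a):
    """Return the index of the element just after the drop (the minimum
    of the dip), or 0 if the list is fully sorted. Worst case O(n)."""
    n = len(a)
    q = deque([(0, n - 1)])          # breadth-first queue of (left, right) ranges
    while q:
        l, r = q.popleft()
        if a[l] > a[r]:
            # The drop lies inside [l, r]; narrow it down by halving.
            while r - l > 1:
                m = (l + r) // 2
                if a[l] > a[m]:
                    r = m            # drop is in the left half
                else:
                    l = m            # drop is in the right half
            return r                 # a[r] < a[r - 1]
        if r - l > 1:
            # Split with a one-element overlap, as in the pairs listed above.
            m = (l + r) // 2
            q.append((l, m))
            q.append((m, r))
    return 0                         # no drop found: already sorted
```

On the question's example (1, 2, 3, 4, 0, 1, 2, 3) this finds index 4; on a "hidden" dip like (10, 20, 30, 31, 26, 40, 50, 60) the breadth-first phase has to descend several levels before any range shows left greater than right, which is where the O(n) worst case comes from.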

Ed Staub
  • Nice! Probably has good average-case behavior, but worst case is still O(n), I think. Suppose the break looked like [... 50, 51, 46, 60, ...]. You would only find that at the lowest level, and might search everything else, depending on where it is. I think you may have had that in mind ("It also depends on any additional constraints on the ranges of values and size of increment from one member to the next.") – Tom Zych Sep 29 '11 at 00:12
  • @Tom - Thanks! Yeah, worst case is O(n). With some constraints, the test can be changed from "left greater than right" to something more likely to catch the dip. In the extreme case where the numbers are known to be sequential, you can test whether the right number is _precisely_ x more than the left, which gets it to worst-case O(log n). – Ed Staub Sep 29 '11 at 01:11
1

It cannot be done in less than O(n).

This kind of worst case will always keep troubling us:

An increasing list a_1, a_2, a_3, ..., a_k, a_{k+1}, ..., a_n

with just one deviation, a_k < a_{k-1}, e.g. 1, 2, 3, 4, 5, 6, 4, 7, 8, 9, 10.

All the other numbers hold absolutely zero information about the value of k or a_k.

Nitin Garg
0

The simplest solution is to just look forward through the list until the next value is less than the current one, or backward to find a value that is greater than the current one. That is O(n).
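The forward scan can be sketched like this (a minimal Python sketch; the function name is mine):

```python
def find_min_linear(a):
    """Walk forward until the next value drops below the current one;
    that next value is the minimum. O(n)."""
    for i in range(len(a) - 1):
        if a[i + 1] < a[i]:
            return a[i + 1]
    return a[0]  # no drop: the list is fully sorted, minimum is first
```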

Doing both concurrently would still be O(n) but the running time would probably be faster (depending on complicated processor/cache factors).

I don't think you can get it much faster algorithmically than O(n) since a lot of the divide-and-conquer search algorithms rely on having a sorted data set.

Dominik Grabiec