
I've been teaching myself data structures in Python and don't know whether I'm overthinking (or underthinking!) the following question:

  • My goal is to come up with an efficient algorithm.

  • With the algorithm, I want to determine whether an integer i exists such that A[i] = i in an array of increasing integers.

  • I then want to find the running time in big-O notation as a function of n, the length of A.

So wouldn't this just be a slightly modified binary search, running in O(log n), applied to the function f(i) = A[i] - i? Am I reading this problem wrong? Any help would be greatly appreciated!

WycG
  • Write some code for a start :) Your reasoning about complexity seems to be right. – Ashalynd Aug 02 '14 at 22:00
  • Belongs to programmers SE, not here. – user93353 Aug 03 '14 at 03:29
  • This boils down to finding zeroes of the function f(x) = A[x] - x. Bisection is just one possible way; another is using a linear approximation between the two endpoints to estimate the position of the zero. – Ulrich Eckhardt Aug 03 '14 at 13:18

3 Answers


Note 1: because you say the integers are increasing, you have ruled out duplicates in the array (otherwise you would say monotonically increasing). So there is a quick check that can rule out a solution: if the first element is larger than 1, no solution exists. In other words, for there to be any chance of a solution, the first element has to be <= 1 (assuming 1-based indexing).

Note 2: similarly, if the last element is less than the length of the array, there is no solution.

In general, I think the best you can do is binary search. Trap the answer between low and high indices, and check the middle index between them. If array[middle] equals middle, return yes. If array[middle] is less than middle, set low to middle + 1; otherwise, set high to middle - 1. If low becomes greater than high, return no.

Running time is O(log n).
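A minimal Python sketch of the search just described (0-based indexing here, so the boundary checks in the notes above shift by one; the function name is chosen for illustration):

```python
def find_fixed_point(a):
    """Return an index i with a[i] == i, or None if no such index exists.

    Assumes a is strictly increasing (no duplicates).
    """
    low, high = 0, len(a) - 1
    while low <= high:
        middle = (low + high) // 2
        if a[middle] == middle:
            return middle          # found a match
        elif a[middle] < middle:
            low = middle + 1       # match, if any, lies to the right
        else:
            high = middle - 1      # match, if any, lies to the left
    return None
```

Each iteration halves the low..high range, giving the O(log n) bound.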

Edit: the algorithm does NOT work if you allow merely monotonically increasing values (i.e. duplicates). Exercise: explain why. :-)

TheGreatContini
0
  • You're correct: finding the index by binary search in a sorted array of length n is O(log n) indeed.

  • However, you can do much better, O(log n) -> O(1) per lookup, if you trade memory complexity for time complexity, which is what "optimizers" tend to do.

What I mean is: if you insert the array elements into an "efficient" hash table, you can perform the find operation in constant time, O(1), on average.

This depends a lot on the elements you're inserting:

  • Are they unique? Think of collisions
  • How often do you insert?
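A sketch of that trade-off (the dict-based index and its names are illustrations, not from the answer; the O(n) build cost only pays off if you perform many lookups):

```python
# Build a hash index once: O(n) time and O(n) extra memory.
a = [-10, -5, 2, 5, 12, 20, 100]
index_of = {value: i for i, value in enumerate(a)}

def position(x):
    """Return the index of x in a, or None if x is absent. O(1) on average."""
    return index_of.get(x)

# The original question, a[i] == i, holds exactly when position(i) == i.
has_fixed_point = any(position(i) == i for i in range(len(a)))
```

Note that answering the fixed-point question this way still scans all n candidate indices; the O(1) win is per individual lookup, not for the one-off question.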
AK_
0

This is an interesting problem :-)

You can use bisection to locate the place where a[i] == i:

       0  1  2  3  4  5  6
a = [-10 -5  2  5 12 20 100]

When i = 3,  i < a[i],  so bisect down
When i = 1,  i > a[i],  so bisect up
When i = 2,  i == a[i], you found the match

The running time is O(log n).

Raymond Hettinger