4

When analyzing the time complexity of an algorithm, we normally consider random access to an array element to take constant time (even though the size n of the array is not a constant), but why?

Consider the Turing machine model, in which an array is stored on the tape. To access a specific element of the array, the tape head has to move to that position, which takes O(n) time. Or is there some other way to store an array for a Turing machine so that random access takes only constant time?

Marcus Müller
xskxzr
  • On a Turing Machine, random access to an index in an array is not O(1). Computers are not Turing machines. – Gordon Linoff Sep 26 '15 at 12:39
  • _we normally consider the time of random access of an array is a constant ..., but why?_ Because we don't usually work on Turing machines. – H H Sep 26 '15 at 12:40

4 Answers

5

Gordon put it quite excellently:

Computers are not Turing machines.

Arrays on "real" typical general purpose machines aren't stored on the "infinitely long" linear storage, but in RAM, Random Access Memory. Technically, (and quite frankly simplifying a lot), you just get any address from RAM by understanding it as a path through the memory addresses. So access to any address takes the same amount of time.

Now, for arrays, you can directly calculate the address of the n'th element by taking the address of the first element and adding n times the size of a single element.
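As a quick illustration (a minimal C sketch of my own, not part of the original answer), the address the compiler computes for a[n] is exactly the base address plus n times the element size:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int a[100];
    size_t n = 42;  /* arbitrary index */

    /* address of a[n] as computed by the compiler */
    uintptr_t direct = (uintptr_t)&a[n];

    /* address computed by hand: base address + n * element size */
    uintptr_t by_hand = (uintptr_t)&a[0] + n * sizeof a[0];

    printf("%s\n", direct == by_hand ? "same address" : "different address");
    return 0;
}
```

Either way, the cost of locating the element is one multiplication and one addition, independent of n.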

Remember: Turing machines are a concept for proving and understanding certain things; they do not reflect how things are actually done. The same goes for complexity calculations: of course, in reality access to an element of a vector doesn't always take exactly the same time, because the assumptions computer scientists need to make in order to say interesting things about algorithms can't fully represent every physical machine that might run them. Real modern computers have caches and prefetching memory controllers, so accessing a piece of memory that you "just" visited is much, much faster than fetching an arbitrary address.

Marcus Müller
  • @TagirValeev: furthermore, if you want to be pedantic, you should read *pedantically* ;) Accessing any address in RAM takes the same amount of time. The CPU, however, doesn't always actually access RAM. – Marcus Müller Sep 26 '15 at 12:55
  • Thanks. So formally, is the time complexity of an algorithm computed by default on a modern computer rather than on a Turing machine? Is there a formal definition of a modern computer? – xskxzr Sep 27 '15 at 00:53
  • @Shenke: No. Not at all. Complexity calculations are typically made under the assumption of having RAM with uniform access time. That represents neither real hardware nor the Turing machine thought experiment. Every good publication describing an algorithm will clearly state the assumptions made when calculating the time complexity -- for example, array index access is O(1) only if memory access time is uniform. It's typically not uniform on real hardware. – Marcus Müller Sep 27 '15 at 07:40
0

An analysis of an algorithm does not consider overhead due to the memory hierarchy (CPU registers, RAM, and external memory).

The reason for this is simple: it would make the analysis non-generic. The analysis would then depend on what kind of hardware you are using, which would complicate it and make it less useful as a general result.

Harman
0

Array access is constant time because array elements are stored contiguously. Contrast that with linked-list access. In a linked list, every node holds the address of the next node, which forces the CPU to visit the first n-1 nodes to reach the nth node. But for an array, you know the address directly: address_to_first_element + (n-1)*size_of_element, and you can therefore access the element in constant time. The assumption here is that the data is stored in Random Access Memory.
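To make the contrast concrete, here is a minimal C sketch (my own illustration, not from the answer): reaching the nth node of a linked list requires following n links, while reaching the nth array element is a single address computation:

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;  /* each node only knows where the next one is */
};

/* O(n): must follow n links to reach the n'th node */
int nth_list_value(const struct node *head, size_t n) {
    for (size_t i = 0; i < n; i++)
        head = head->next;
    return head->value;
}

/* O(1): one multiplication and one addition, done by the indexing itself */
int nth_array_value(const int *a, size_t n) {
    return a[n];  /* equivalent to *(a + n) */
}
```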

For Turing machines, the data is stored on a tape, and a tape is a linear-access device. The Wikipedia article on Turing machines says:

While they can express arbitrary computations, their minimalistic design makes them unsuitable for computation in practice: actual computers are based on different designs that, unlike Turing machines, use random access memory.

displayName
0

In an array, all elements are stored in contiguous memory locations. To access any element, its address is computed as an offset from the base address of the array: the size of the array's element type is multiplied by the index of the element, and the result is added to the base address to get the element's memory address. As this process takes one multiplication and one addition, array elements can be accessed in constant time.

Shrey