14

It is said that an example of an O(1) operation is accessing an element in an array. According to one source, O(1) can be defined in the following manner:

[Big-O of 1] means that the execution time of the algorithm does not depend on the size of the input. Its execution time is constant.

However, if one wants to access an element in an array, doesn't the efficiency of the operation depend on the number of elements in the array? For example:

int[] arr = new int[1000000];
addElements(arr, 1000000); // A function which adds 1 million arbitrary integers to the array.

int foo = arr[55]; 

I don't understand how the last statement can be described as running in O(1); don't the 1,000,000 elements in the array have a bearing on the running time of the operation? Surely it would take longer to find element 55 than element 1? If anything, this looks to me like O(n).

I'm sure my reasoning is flawed; I just wanted some clarification as to how this can be said to run in O(1).

Mr Chasi

7 Answers

25

An array is a data structure in which objects are stored at contiguous memory locations. So in principle, if you know the address of the base object, you can find the address of the ith object:

addr(a[i]) = addr(a[0]) + i*size(object)

This makes accessing the ith element of an array O(1).

EDIT
Theoretically, when we talk about the complexity of accessing an array element, we talk about a fixed index i.
Input size = O(n).
To access the ith element, we compute addr(a[0]) + i*size(object). This term is independent of n, so it is said to be O(1).

The multiplication still depends on i, but not on n, so it is constant: O(1).
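The address arithmetic above can be sketched in Java; `baseAddr` and `elemSize` below are made-up illustrative values, not real memory addresses:

```java
// A sketch of the address arithmetic; baseAddr and elemSize are made-up
// illustrative values, not real memory addresses.
public class AddressDemo {
    static long elementAddress(long baseAddr, long i, long elemSize) {
        return baseAddr + i * elemSize;  // one multiply, one add: O(1)
    }

    public static void main(String[] args) {
        long base = 0x1000;  // assumed base address (4096)
        long size = 4;       // size of an int in bytes
        System.out.println(elementAddress(base, 0, size));   // 4096
        System.out.println(elementAddress(base, 55, size));  // 4316
    }
}
```

Notice that the amount of work is the same whether i is 0 or 999,999; nothing in the formula depends on the array's length.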

Rishit Sanmukhani
  • Multiplication is constant-time on most architectures, regardless of the operands. The circuit-depth for multiplication of two w-bit numbers is O(log w), and even if it's theoretically problematic to say that the number of bits in a machine word is constant (because if w is constant, then mathematically, 2^w is constant too, so all sizes that fit within a machine word are constant), it's certainly fixed for any given CPU, and a O(log w) circuit depth operation would normally be done within a constant number of clock cycles, regardless of `i`. – njlarsson Apr 05 '22 at 11:41
5

The address of an element in memory will be the base address of the array plus the index times the size of an element in the array. So to access that element, you essentially just access memory_location + 55 * sizeof(int).

This of course assumes that multiplication takes constant time regardless of the size of its inputs, which is arguably incorrect if you are being very precise.

C.B.
  • Theoretically, you can't assume multiplication takes constant time. But in our case, since it involves numbers independent of the input size, we take it to be constant. – Rishit Sanmukhani May 09 '16 at 16:55
  • Right, that was more in the context of "accessing the `ith` index" – C.B. May 09 '16 at 16:59
  • This is the most accurate answer. The assumption is that memory is addressed using numbers expressed in constant bit size. – BKE May 09 '16 at 19:16
  • Big O notation describes limiting behavior and is a rather theoretical concept, where memory (tape) is unlimited and linear, so a generic "access" operation should generally be considered O(1), IMO. – Robert Goldwein May 09 '16 at 21:42
4

Finding an element isn't O(1), but accessing an element in an array has nothing to do with finding one. To be precise, you don't interact with the other elements; you don't need to access anything but your single element. You just calculate its address, regardless of how big the array is, and that is a single operation; hence O(1).
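The find/access distinction can be made concrete with a short Java sketch; `find` below is an illustrative linear-search helper, not a library method:

```java
import java.util.Arrays;

// Contrast between *finding* a value (O(n): scan until it turns up) and
// *accessing* an index (O(1): the address is computed directly).
// find is an illustrative helper, not a library method.
public class FindVsAccess {
    static int find(int[] arr, int value) {
        for (int i = 0; i < arr.length; i++) {  // may inspect every element
            if (arr[i] == value) return i;
        }
        return -1;  // not found
    }

    public static void main(String[] args) {
        int[] arr = new int[1_000_000];
        Arrays.fill(arr, 7);
        arr[55] = 42;
        int byIndex = arr[55];        // O(1): single computed lookup
        int byValue = find(arr, 42);  // O(n): scans from the front
        System.out.println(byIndex + " at index " + byValue); // 42 at index 55
    }
}
```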

Robert Goldwein
4

The machine code (or, in the case of Java, virtual machine code) generated for the statement

int foo = arr[55];

would essentially be:

  1. Get the starting memory address of arr into A
  2. Add 55, scaled by the size of an int, to A
  3. Take the contents of the memory address in A, and put it in the memory address of foo

These three instructions all take O(1) time on a standard machine.
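For the JVM case specifically, the bytecode in the comments below is what javac typically emits for the array read; exact javap -c output can vary with compiler version:

```java
// The bytecode in the comments is what javac typically emits for the
// array read; exact javap -c output can vary with compiler version.
public class ArrayAccess {
    public static void main(String[] args) {
        int[] arr = new int[1_000_000];
        int foo = arr[55];
        // aload_1    -- push the reference held in arr
        // bipush 55  -- push the constant index 55
        // iaload     -- fetch arr[55]; the JVM computes the address directly
        // istore_2   -- store the fetched value into foo
        System.out.println(foo); // prints 0 (arrays are zero-initialized)
    }
}
```

The iaload instruction takes the same time whatever the index, which is the O(1) claim in machine terms.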

njlarsson
3

In theory, array access is O(1), as others have already explained, and I guess your question is more or less theoretical. Still, I'd like to bring in another aspect.

In practice, array access will get slower if the array gets large. There are two reasons:

  • Caching: The array will not fit into cache or only into a higher level (slower) cache.
  • Address calculation: For large arrays, you need larger index data types (for example, long instead of int). This makes address calculation slower, at least on most platforms.
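The caching effect can be made visible with a rough Java micro-benchmark; this is a sketch, not a rigorous benchmark (JIT warm-up, GC, and timer noise all affect the numbers), and the array sizes are assumptions about typical cache sizes:

```java
import java.util.Random;

// A rough sketch, not a rigorous benchmark: JIT warm-up, GC and timer
// noise all affect the numbers. The array sizes are assumptions about
// typical cache sizes on desktop CPUs.
public class CacheDemo {
    static long probe(int[] arr, int accesses) {
        Random rnd = new Random(42);  // fixed seed for repeatability
        long sum = 0;
        long start = System.nanoTime();
        for (int k = 0; k < accesses; k++) {
            sum += arr[rnd.nextInt(arr.length)];  // each access is O(1) in theory
        }
        System.out.println(arr.length + " elements: "
                + (System.nanoTime() - start) + " ns");
        return sum;
    }

    public static void main(String[] args) {
        probe(new int[1 << 10], 1_000_000);  // ~4 KiB: fits in L1 cache
        probe(new int[1 << 24], 1_000_000);  // ~64 MiB: exceeds typical caches
    }
}
```

On most machines the large array is noticeably slower per access, even though both loops execute the same number of O(1) index operations.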
Frank Puffer
  • I upvoted this answer because I think this knowledge is increasingly important. However, I object to the “in theory” vs “in practice” dichotomy. Your explanation is true in theory as well; it just requires a less common and more precise theory. (If your theory differs from practice in any important way, you are using an unsuitable theory.) – njlarsson Dec 17 '21 at 15:19
1

If we say the subscript operator (indexing) has O(1) time complexity, we make this statement excluding the runtime of any other operations/statements/expressions/etc. So addElements does not affect the indexing operation.

Surely it'd take longer to find element 55 than it would element 1?

"find"? Oh no! "Find" implies a relatively complex search operation. We know the base address of the array. To determine the value at arr[55], we simply add 551 to the base address and retrieve the value at that memory location. This is definitely O(1).


¹ Since every element of an int array occupies at least two bytes (when using C), this is not exactly true. 55 needs to be multiplied by the size of int first.

cadaniluk
1

Arrays store their data contiguously, unlike linked lists, trees, graphs, or other data structures that use references to find the next/previous element.

It is intuitive to you that the access time of the first element is O(1). However, you feel that the access time of the 55th element is O(55). That's where you've got it wrong. You know the address of the first element, so the access time for it is O(1).

But you also know the address of the 55th element. It is simply the address of the 1st + size_of_each_element * 54.

Hence you can access that element, as well as any other element of the array, in O(1) time. And that is the reason why you cannot have elements of multiple types in an array: it would completely mess up the math for finding the address of the nth element.

So, access to any element in an array is O(1), and all elements have to be of the same type.

displayName