
Doesn't a list require O(n) time to increase its size? How, then, could `heapq.heappush` be O(log n)?

Lay González
    "Doesn't a list require O(n) to increase its size" - no? Or at least, it's only O(n) worst case. It's O(1) amortized. – user2357112 Nov 04 '20 at 03:57
  • You might be thinking of insertion time, but `heapq.heappush` isn't a list insertion operation. – user2357112 Nov 04 '20 at 03:59
  • I was confused. I thought that lists were arrays. – Lay González Nov 04 '20 at 04:02
  • @LayGonzález: They maintain arrays under the hood, but they're resizable, and they overallocate (maintaining size and capacity separately; there's usually some space available to append elements without resizing) to make `append`s and `pop`s from the end amortized `O(1)`. Think C++ `vector`, Java `ArrayList`, etc. – ShadowRanger Nov 04 '20 at 04:31

1 Answer


A list has amortized O(1) appends; every once in a long while, it needs to expand the underlying capacity (an O(n) reallocation and copy), but usually an append just claims already allocated capacity.
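You can see this in practice with a small sketch (assuming CPython, where `sys.getsizeof` reports the list object's current allocation, including the overallocated pointer array): the reported size grows in occasional jumps, not on every append.

```python
import sys

lst = []
last_size = sys.getsizeof(lst)   # current allocation, not just the used slots
for i in range(64):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:
        # A reallocation happened on this append; most appends don't trigger one.
        print(f"len={len(lst):>3}  allocation grew to {size} bytes")
        last_size = size
```

The exact sizes and growth pattern vary by Python version and platform, but the point is that only a handful of the 64 appends cause a reallocation.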

So yes, every once in a while, `heapq.heappush` will incur O(n) work to reallocate the underlying list's storage. But the vast majority of the time, adding the extra item (done via `append` internally) is O(1), and it is followed by an O(log n) sift operation that moves the new item to its correct position in the heap (reestablishing the heap invariant). The sift is implemented with element swaps, which are all O(1), not list insertions and deletions (which would be O(n) each).
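For intuition, here is a minimal sketch of that push logic. It is not CPython's actual implementation (`heapq` does the equivalent step in its private `_siftdown` helper), but it shows the shape of the work: one amortized O(1) `append`, then at most O(log n) constant-time swaps toward the root.

```python
import heapq

def heappush_sketch(heap, item):
    """Simplified stand-in for heapq.heappush on a min-heap stored in a list."""
    heap.append(item)                # amortized O(1); occasionally O(n) to resize
    pos = len(heap) - 1
    while pos > 0:                   # at most ~log2(n) iterations
        parent = (pos - 1) // 2
        if heap[pos] < heap[parent]:
            # O(1) swap; nothing is shifted, so no O(n) insert/delete cost
            heap[pos], heap[parent] = heap[parent], heap[pos]
            pos = parent
        else:
            break                    # heap invariant holds again

h, ref = [], []
for value in [5, 3, 8, 1, 9, 2]:
    heappush_sketch(h, value)
    heapq.heappush(ref, value)       # the real thing, for comparison
print(h, ref)                        # both keep the smallest item at index 0
```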

ShadowRanger