
I have a few questions regarding binary heaps.

I'm trying to understand the "sift up" method of building the heap, and its time complexity of Θ(n log n).

    void MakeHeap(T[1..n])
    {
        for (i = 2; i <= n; i++)
            sift_up(T[1..n], i);   // sift the element at index i up toward the root
    }
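
To make sure I'm reading the pseudocode right, here is my understanding of sift_up as a minimal C sketch (0-based indexing instead of the book's 1-based arrays, a min-heap, and names of my own choosing, so an assumption rather than the book's actual code):

    #include <stddef.h>

    /* Swap helper (my own, not part of the book's pseudocode). */
    static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    /* Move the element at index i up until its parent is no larger.
       Each step climbs one level of the tree, so one call costs O(log n). */
    static void sift_up(int T[], size_t i)
    {
        while (i > 0) {
            size_t parent = (i - 1) / 2;
            if (T[parent] <= T[i])
                break;                  /* heap property already holds */
            swap(&T[parent], &T[i]);
            i = parent;
        }
    }

    /* Build a min-heap by sifting each element up into the prefix before it. */
    static void make_heap_sift_up(int T[], size_t n)
    {
        for (size_t i = 1; i < n; i++)
            sift_up(T, i);
    }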

Worst case boundary: seems to be n − 1 calls to sift_up (n obviously being the number of nodes), each climbing at most the height of the heap, making it O(n log n). Best case boundary: the array is already a binary heap, so every sift_up stops after one comparison, making it Ω(n) - so where does the log n in the Θ(n log n) bound come from?
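
To convince myself about the worst case, I tried counting climb steps on a strictly decreasing input, where (for a min-heap) every new element is smaller than everything before it and climbs all the way to the root. This is my own test code, not anything from the book, and it reuses the swap helper from the sketch above:

    #include <stdio.h>

    /* Like sift_up above, but returns how many levels element i climbed. */
    static size_t sift_up_counting(int T[], size_t i)
    {
        size_t steps = 0;
        while (i > 0) {
            size_t parent = (i - 1) / 2;
            if (T[parent] <= T[i])
                break;
            swap(&T[parent], &T[i]);
            i = parent;
            steps++;
        }
        return steps;
    }

    int main(void)
    {
        enum { N = 1 << 16 };
        static int T[N];
        for (size_t i = 0; i < N; i++)
            T[i] = (int)(N - i);          /* strictly decreasing input */

        size_t total = 0;
        for (size_t i = 1; i < N; i++)
            total += sift_up_counting(T, i);

        /* total is the sum of floor(log2(i + 1)) over all i, i.e. Θ(n log n) */
        printf("n = %d, total sift-up steps = %zu\n", N, total);
        return 0;
    }

For n = 2^16 this prints a total on the order of n · log2(n), which matches the O(n log n) worst case I expected.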

Also, I've been looking here to try to understand the process, but now I'm even more confused. The O(n log n) construction does use sift up (I think), but it starts from an empty heap and inserts the elements one at a time - so no wonder the complexity is O(n log n). The O(n) construction uses sift down on the given array in place, with no insertions, so I'm not sure the two are even comparable.
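
For concreteness, the O(n) construction I have in mind looks roughly like this (again a min-heap sketch with my own names, reusing the swap helper from above):

    /* Move the element at index i down until neither child is smaller. */
    static void sift_down(int T[], size_t n, size_t i)
    {
        for (;;) {
            size_t smallest = i;
            size_t left = 2 * i + 1;
            size_t right = 2 * i + 2;
            if (left < n && T[left] < T[smallest])
                smallest = left;
            if (right < n && T[right] < T[smallest])
                smallest = right;
            if (smallest == i)
                break;                  /* heap property holds here */
            swap(&T[i], &T[smallest]);
            i = smallest;
        }
    }

    /* Build a min-heap in place by sifting down every internal node,
       from the last parent up to the root. This is the O(n) version. */
    static void make_heap_sift_down(int T[], size_t n)
    {
        for (size_t i = n / 2; i-- > 0; )
            sift_down(T, n, i);
    }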

So to summarize - I'm not sure why building with sift up is said to have a lower bound of n log n, and why some constructions use "insert" while others "sift" a given array.

  • Who cares about the best case? A valuable property of heaps (and heapsort, for example) is the guaranteed O(n log n). – MBo Apr 24 '16 at 12:48
  • `Best case boundary: the array is already a binary heap, so every sift_up stops after one comparison, making it Ω(n) - so where does the log n come from?` There is analysis of an _algorithm_ / _procedure_ (`the "sift up" method`), and there is analysis of a _problem_ (_build a heap_). "The method" does not include a check for _already a binary heap_, so the argument that a simple check would do simply doesn't apply. Given a method that included such a check, it would apply - with a correspondingly poor Θ. – greybeard Nov 23 '16 at 05:56
