
Consider a binary max-heap with n elements. It has a height of O(log n). When a new element is inserted into the heap, it is moved within the heap so that the max-heap property is always satisfied.

The new element is first appended as a leaf on the last level. After the insertion, the max-heap property may be violated, so the heapify method is used to restore it. This has a time complexity of O(log n), i.e. the height of the heap.
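For concreteness, here is a minimal array-based sketch of such an insert (the code and names are illustrative, not part of the original question):

```python
def insert(heap, value):
    """Append value as the last leaf, then sift it up to restore
    the max-heap property: O(log n) comparisons and writes."""
    heap.append(value)                 # new leaf on the last level
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] >= heap[i]:    # property restored; stop
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent                     # one comparison and one swap per level

h = [9, 5, 7, 1, 3]   # a valid max-heap
insert(h, 8)          # h becomes [9, 5, 8, 1, 3, 7]
```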

But can we make it even more efficient?
When many inserts and deletes are performed, this procedure becomes slow. Also, it is a strict requirement that the structure be a valid max-heap after every insertion.

**The objective is to reduce the time complexity of the heapify method.** This is possible only if the number of comparisons is reduced.

taurus05
    _"But can we make it even more efficient?"_ I'm sure many people would have thought of making it more efficient, but this is an open ended question. Have you thought of any way to do this? – Abhinav Mathur Jan 08 '23 at 05:52
  • 2
    If you're speaking about algorithm complexity, no is not possible to make the insertion/deletion faster because as for now, the complexity to sort (by comparisons) an arbitrary array is O(nlogn) and if you found a way faster that will contradict this.. if you're speaking about computer efficiency (faster algorithm by a constant) it depend on many variables such as code language/cpu/threads etc. anyway there's no faster way if you're speaking on the complexity of insertion/deletion with arbitrary array. – JackRaBeat Jan 08 '23 at 07:20
  • @AbhinavMathur While performing heapify, I found that the elements in the path from the new leaf node inserted in the heap till the root node will always have elements sorted in increasing order, going upwards. Hence, could've used some way to determine the exact position to place the new node. It's like doing some search first (to determine the exact position where the new node will be inserted, and then just insert it at that point). This might help in reducing the number of comparisons and there won't be any propagation of node (which usually happens in heapify method) – taurus05 Jan 08 '23 at 08:23
  • Although you could imagine having fewer comparisons, you would still have the same number of *writes* in your tree (moving of values to another location). You cannot reduce the time complexity. The diff between the heap *before* the insertion and *after* the insertion is O(logn). – trincot Jan 08 '23 at 08:32
  • @trincot If binary search is used, won't the total comparisons decrease? Assuming that we're using array to represent heap. – taurus05 Jan 08 '23 at 08:36
  • 1
    Yes, but that doesn't help. After comparing you still have to move values, and there are O(logn) of them. – trincot Jan 08 '23 at 08:36
  • But earlier, there were O(log n) comparisons. But now, after performing binary search, O(log log n) comparisons are being performed. Is that correct ? Can't we say that it's improvement? – taurus05 Jan 08 '23 at 08:39
  • With some extra overhead that would be possible, but your question (in bold) says *"The objective is to reduce the time complexity of heapify method"*, which is not going to happen. – trincot Jan 08 '23 at 08:40
  • Yeah, that's correct. But i've also mentioned that `This is possible only when the number of comparisons are reduced.` Is this wrong? – taurus05 Jan 08 '23 at 08:41
  • 1
    That is indeed a *necessary* condition, but not a *sufficient* condition. – trincot Jan 08 '23 at 08:42
  • `post insertion, there can be violation of max-heap property. Hence, heapify method will be used` I think [*heapify*](https://en.m.wikipedia.org/wiki/Binary_heap#Building_a_heap) is more often used as the name for turning an *n* item array into a heap - O(*n*) algorithm known. The operation(s) to restore the heap property after appending an item are more often called [sift(-up)](https://en.m.wikipedia.org/wiki/Binary_heap#Insert) and complete an *insert*. – greybeard Jan 08 '23 at 11:11

2 Answers


_"The objective is to reduce the time complexity of the heapify method."_

That is a pity, because that is impossible. In contrast, it *is* possible to reduce the time complexity of multiple inserts and deletes:

• Imagine not inserting into the n-item heap immediately, but building an auxiliary one (or even a list).
• On a delete (extract?), place one item from the auxiliary structure (now at size k) "in the spot emptied" and do a sift-down or sift-up as required; this stays cheap as long as k ≪ n.
• Once the auxiliary data structure is no longer significantly smaller than the main one, merge the two.
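A rough sketch of the deferred-insert flavor of this idea, assuming Python's `heapq` (a simplified two-heap variant, not exactly the in-place replacement described above; the class name and merge threshold are illustrative):

```python
import heapq

class LazyMaxHeap:
    """Sketch: buffer inserts in a small auxiliary heap and serve
    extract-max from whichever heap holds the larger top; merge
    once the auxiliary buffer is no longer small."""

    def __init__(self, items=()):
        # heapq is a min-heap, so store negated values for max-heap behavior
        self.main = [-x for x in items]
        heapq.heapify(self.main)              # O(n) build
        self.aux = []                         # auxiliary heap of size k

    def insert(self, x):
        heapq.heappush(self.aux, -x)          # O(log k), cheap while k << n
        if len(self.aux) ** 2 > len(self.main):
            self._merge()                     # k is no longer << n

    def extract_max(self):
        # pop from the heap whose top is larger (negated: smaller)
        if self.aux and (not self.main or self.aux[0] < self.main[0]):
            return -heapq.heappop(self.aux)
        return -heapq.heappop(self.main)

    def _merge(self):
        self.main += self.aux
        heapq.heapify(self.main)              # O(n + k) rebuild
        self.aux = []
```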

Such ponderings lead to advanced heaps like Fibonacci, pairing, Brodal…

greybeard

The time complexity of the insert operation in a heap is dependent on the number of comparisons that are made. One could imagine using some overhead to implement a smart binary search along the leaf-to-root path.

However, the time complexity is not only determined by the number of comparisons. Time complexity is determined by any work that must be performed, and in this case the number of writes is also O(log n), and that number of writes cannot be reduced.

The number of nodes whose values need to change during the insert operation is O(log n). A reduction of the number of comparisons alone is therefore not enough to reduce the complexity.
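To make that concrete, here is a hypothetical variant of sift-up (not code from this answer) that binary-searches the leaf-to-root path, which is sorted in increasing order going upward: the comparisons drop to O(log log n), but placing the new value still shifts O(log n) ancestors, so the overall complexity stays O(log n):

```python
from bisect import bisect_left

def insert_with_path_search(heap, value):
    """Sketch: locate the new value's spot on the ancestor path by
    binary search (O(log log n) comparisons), then shift the smaller
    ancestors down, which still costs O(log n) writes."""
    heap.append(value)
    path = []                         # ancestor indices, parent first
    i = len(heap) - 1
    while i > 0:
        i = (i - 1) // 2
        path.append(i)
    keys = [heap[j] for j in path]    # non-decreasing from leaf to root
    m = bisect_left(keys, value)      # count of ancestors smaller than value
    child = len(heap) - 1
    for j in path[:m]:
        heap[child] = heap[j]         # shift ancestor down one level
        child = j
    heap[child] = value               # value lands above the shifted ones

h = [9, 5, 7, 1, 3]
insert_with_path_search(h, 8)         # h becomes [9, 5, 8, 1, 3, 7]
```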

trincot
• `If [o(log) heap insert] were possible, then heap sort would have become an algorithm with better complexity than O(log)` No. There would still be an O(log) heap extract. – greybeard Jan 08 '23 at 11:13
• Yes, I silently assumed that the (claimed) optimisation of bubbling up would also be applicable to sifting down. If the claim does not involve a similar improvement to sifting down, then the complexity of heap sort is not improved. – trincot Jan 08 '23 at 11:15