
Is there a significant difference in functionality and/or efficiency between using heappushpop and peeking at the heap first then deciding whether to pop (using heapreplace)?

E.g.

from heapq import *
a = [5, 18, 9, 14, 22]
heapify(a) # a = [5, 14, 9, 18, 22]
heappushpop(a, 7) # Returns 5 and a = [7, 14, 9, 18, 22]
heappushpop(a, 2) # Returns 2 and a = [7, 14, 9, 18, 22]
from heapq import *
def customPop(heap, val):
    # Peek first: if the root is already >= val, pushing val would just
    # pop it straight back out, so return it unchanged.
    if heap and heap[0] >= val:
        return val
    # Note: unlike heappushpop, this raises IndexError on an empty heap,
    # because heapreplace always pops before pushing.
    return heapreplace(heap, val)
    
a = [5, 18, 9, 14, 22] 
heapify(a) # a = [5, 14, 9, 18, 22]
customPop(a, 7) # Returns 5 and a = [7, 14, 9, 18, 22]
customPop(a, 2) # Returns 2 and a = [7, 14, 9, 18, 22]
g999
    You do know that you have the source code for `heapq` on your system, right? `heappushpop` is essentially implemented like your `customPop` function. – Tim Roberts Aug 11 '22 at 17:06
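As the comment suggests, the pure-Python fallback for `heappushpop` in CPython's `Lib/heapq.py` follows essentially the same peek-then-replace pattern; a paraphrased sketch (the stdlib normally runs a C-accelerated version instead, and `_siftup` is a private helper):

```python
import heapq

def pushpop(heap, item):
    # Paraphrase of CPython's pure-Python heappushpop fallback.
    # Only swap when the new item would displace the current root.
    if heap and heap[0] < item:
        item, heap[0] = heap[0], item
        heapq._siftup(heap, 0)  # private helper: restore the heap invariant
    return item

a = [5, 18, 9, 14, 22]
heapq.heapify(a)
print(pushpop(a, 7))  # → 5 (the old root)
print(a[0])           # → 7 (the new root)
```

The C implementation is a large part of why the stdlib call wins in the benchmarks below: the comparison and sift happen without Python-level function-call overhead.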

1 Answer


The custom method appears to be about 1.5 times slower.

Benchmarked with IPython on Python 3.10.3, substituting each call for `<statement>` in this template:

%%timeit
a = [5, 18, 9, 14, 22]
heapify(a)
<statement>

Results:

heappushpop(a, 7)   365 ns ± 3.87 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
heappushpop(a, 2)   326 ns ± 3.74 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
customPop(a, 7)     539 ns ± 7.39 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
customPop(a, 2)     475 ns ± 27.3 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
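The same comparison can be reproduced without IPython via the standard `timeit` module; a minimal sketch (absolute numbers are machine-dependent and will differ from the figures above):

```python
import timeit

# Setup mirrors the question's definitions; rebuilt fresh for each run.
setup = """
from heapq import heapify, heappushpop, heapreplace
def customPop(heap, val):
    if heap and heap[0] >= val:
        return val
    return heapreplace(heap, val)
"""

# Rebuild the heap inside the timed statement, matching the %%timeit cell.
template = "a = [5, 18, 9, 14, 22]; heapify(a); {}"
for call in ("heappushpop(a, 7)", "heappushpop(a, 2)",
             "customPop(a, 7)", "customPop(a, 2)"):
    total = timeit.timeit(template.format(call), setup=setup, number=100_000)
    print(f"{call:22s} {total / 100_000 * 1e9:6.0f} ns per loop")
```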
ljmc