
I am implementing a Fibonacci heap according to the original paper of Fredman and Tarjan. If I understand the paper correctly, to perform the DecreaseKey operation on a node x, we simply cut it from its parent. But if the key after decreasing is still larger than the parent's key, that seems inefficient to me. I also see many designs, such as the one in CLRS, that cut a node only when its new key becomes smaller than its parent's key.

So I am a bit confused about the original design. Why didn't they use the more efficient approach to DecreaseKey? Or maybe the unconditional cut makes the amortised analysis easier? Any response is appreciated. Thanks in advance.
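To make the difference between the two behaviours concrete, here is a minimal sketch in Python. The names (`FibHeapSketch`, the `always_cut` flag) are illustrative and not from the paper or CLRS; it only models the decrease-key path with cascading cuts, the root list is a plain Python list rather than a circular doubly linked list, and extract-min/consolidation is omitted entirely:

    # Minimal sketch of the two DecreaseKey variants, not a full Fibonacci heap.
    class Node:
        def __init__(self, key):
            self.key = key
            self.parent = None
            self.children = []
            self.mark = False          # used by cascading cuts

    class FibHeapSketch:
        def __init__(self, always_cut=False):
            # always_cut=True  -> cut on every DecreaseKey (my reading of the paper)
            # always_cut=False -> cut only when heap order is violated (CLRS-style)
            self.always_cut = always_cut
            self.roots = []
            self.min = None

        def insert(self, key):
            node = Node(key)
            self.roots.append(node)
            if self.min is None or key < self.min.key:
                self.min = node
            return node

        def decrease_key(self, x, new_key):
            assert new_key <= x.key, "new key must not be larger"
            x.key = new_key
            y = x.parent
            violates_order = y is not None and x.key < y.key
            if y is not None and (self.always_cut or violates_order):
                self._cut(x, y)
                self._cascading_cut(y)
            if x.key < self.min.key:
                self.min = x

        def _cut(self, x, y):
            # Remove x from y's child list and make it a root.
            y.children.remove(x)
            x.parent = None
            x.mark = False
            self.roots.append(x)

        def _cascading_cut(self, y):
            z = y.parent
            if z is not None:
                if not y.mark:
                    y.mark = True
                else:
                    self._cut(y, z)
                    self._cascading_cut(z)

    if __name__ == "__main__":
        h = FibHeapSketch(always_cut=False)
        a = h.insert(10)
        b = h.insert(3)
        # Pretend 'a' became a child of 'b' during an earlier extract-min.
        h.roots.remove(a)
        a.parent = b
        b.children.append(a)

        h.decrease_key(a, 7)           # 7 > 3: conditional variant does not cut
        print(len(h.roots))            # 1
        h.decrease_key(a, 1)           # 1 < 3: heap order violated, 'a' is cut
        print(len(h.roots), h.min.key) # 2 1

With `always_cut=True`, the first `decrease_key(a, 7)` call would already move `a` to the root list even though heap order is still intact, which is exactly the extra work I am asking about.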

Snjór

1 Answer


I can't speak for Fredman and Tarjan (though I audited one of Tarjan's classes once), but presumably they were focused on the worst-case amortized complexity of DecreaseKey, on which that optimization has no effect.

David Eisenstat
  • Thank you, David! Yeah, that might be the case. It's just that I found the practical performance of the two versions differs a lot, so I'm curious why they designed it this way (although both yield the same amortized bound). – Snjór Mar 13 '19 at 22:22
  • @Snjór It's simpler, and theoreticians largely only care about the theoretical worst-case anyway. – David Eisenstat Mar 13 '19 at 22:25