I was trying to implement a slightly modified version of Dijkstra's algorithm using the PriorityQueue class for the priority queue. I came across a weird behavior that I couldn't find an explanation for anywhere. I have my suspicions about this behavior but would like to know what others think. Could someone explain why PriorityQueue behaves as it does below?
Since the standard algorithm involves decreasing a node's weight during a 'relax' operation, it would normally mean fetching a handle to the node and decreasing its key in place, which PriorityQueue does not support directly (I am aware of the method where you back the queue up with a HashMap, but this is an experiment). The method I tried instead simply adds another copy of the node into the PriorityQueue whenever its weight decreases; the duplicates are ignored once a node has been processed. A rough sketch of that relax step is below (the Edge class and the dist and settled names are only illustrative).
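import java.util.List;
import java.util.PriorityQueue;

class LazyDijkstraSketch {
    static class Edge {
        final int to, w;
        Edge(int to, int w) { this.to = to; this.w = w; }
    }

    // dist[] doubles as the comparator key, just like weight[] in the snippet further down;
    // the caller is assumed to pre-fill dist with a large sentinel (e.g. Integer.MAX_VALUE / 2)
    static void dijkstra(List<List<Edge>> adj, int[] dist, int source) {
        PriorityQueue<Integer> pq = new PriorityQueue<>((a, b) -> dist[a] - dist[b]);
        boolean[] settled = new boolean[dist.length];
        dist[source] = 0;
        pq.add(source);
        while (!pq.isEmpty()) {
            int u = pq.remove();
            if (settled[u]) continue;           // stale duplicate of an already processed node: skip it
            settled[u] = true;
            for (Edge e : adj.get(u)) {
                if (dist[u] + e.w < dist[e.to]) {
                    dist[e.to] = dist[u] + e.w; // 'relax': mutate the key the comparator reads...
                    pq.add(e.to);               // ...and add another copy of the node instead of a decrease-key
                }
            }
        }
    }
}

The problem, however, with such an implementation is that the PriorityQueue behaves in a seemingly undefined way for duplicates added after a weight has been changed. Here's a minimal snippet showing the behavior: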
import java.util.PriorityQueue;

public class PriorityQueueTest {
    public static void main(String[] args) {
        int[] weight = new int[]{1, 2, 3};
        PriorityQueue<Integer> pq = new PriorityQueue<>(weight.length, (a, b) -> weight[a] - weight[b]);
        for (int i = 0; i < 3; i++)
            pq.add(i);
        // change the weight of node '1' from 2 to 0
        weight[1] = 0;
        // add a duplicate of node '1'; this time the comparator sees weight 0
        pq.add(1);
        while (!pq.isEmpty())
            System.out.println(pq.remove());
    }
}
With the weight of node '1' decreased, and its priority therefore now the highest, one would expect the nodes to be removed in the order:
1
1
0
2
The actual result, however, is different:
0
1
1
2
Also note that if I work with just nodes 0 and 1 (changing the for loop above to
for (int i = 0; i < 2; i++)
), then the expected order of
1
1
0
is the same as the actual order!
You can play with adding more nodes and changing some other node's weight; this again gives the unexpected behavior sometimes, depending on the number of nodes added after the first duplicate.
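For anyone who wants to try more configurations, a small parameterization of the snippet above along these lines makes that easy (run is just an illustrative helper; the outputs noted in the comments are the ones reported above):

import java.util.PriorityQueue;

public class PriorityQueueExperiment {
    // adds nodes 0..n-1, then changes one node's weight, adds a duplicate of that node,
    // and prints the order in which the nodes come back out of the queue
    static void run(int n, int[] weight, int node, int newWeight) {
        PriorityQueue<Integer> pq = new PriorityQueue<>(n, (a, b) -> weight[a] - weight[b]);
        for (int i = 0; i < n; i++)
            pq.add(i);
        weight[node] = newWeight;   // mutate the key the comparator reads
        pq.add(node);               // add the duplicate
        while (!pq.isEmpty())
            System.out.print(pq.remove() + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        run(3, new int[]{1, 2, 3}, 1, 0);   // the case above: prints 0 1 1 2
        run(2, new int[]{1, 2}, 1, 0);      // the two-node case: prints 1 1 0
    }
}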
My suspicion is that the weight update silently breaks the heap invariant for the old entry (it now has a higher priority yet sits in a position that was only correct for its old weight), and that the heapify performed when the duplicate is added, as well as any subsequent heapifies involving that old entry, can then leave the heap in an inconsistent state. Thoughts?
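To make that suspicion concrete, here is a small standalone simulation of what I think happens on the fourth add, assuming PriorityQueue performs a standard array-based binary-heap sift-up on insertion (the array layout below is my own reconstruction for illustration, not taken from the JDK source):

import java.util.Arrays;

public class SiftUpIllustration {
    public static void main(String[] args) {
        // weights after the update: node 1 has already been changed from 2 to 0
        int[] weight = {1, 0, 3};
        // heap array after adding 0, 1, 2 and placing the duplicate of node 1 in the next
        // free slot (3), but before sifting it up; the parent of slot i is (i - 1) / 2.
        // Note the invariant is already broken: slot 1 (weight 0) outranks the root (weight 1).
        int[] h = {0, 1, 2, 1};
        int i = 3; // slot of the newly added duplicate
        while (i > 0) {
            int parent = (i - 1) / 2;
            // only the ancestors of slot i are ever examined; here that ancestor is the stale
            // entry for node 1 itself, which now also compares as weight 0, so the result is
            // 0 - 0 == 0 and the sift-up stops immediately
            if (weight[h[i]] - weight[h[parent]] >= 0) break;
            int tmp = h[i]; h[i] = h[parent]; h[parent] = tmp;
            i = parent;
        }
        // prints [0, 1, 2, 1]: node 0 (weight 1) is still at the root above two weight-0
        // entries, which matches the removal order 0, 1, 1, 2 seen above
        System.out.println(Arrays.toString(h));
    }
}

If that reconstruction is right, it would also explain the two-node case: there the duplicate's parent is the root itself (node 0 with weight 1), so the sift-up does see the 0 < 1 comparison, swaps, and the output matches the expected order.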