A naïve quicksort takes O(n^2) time to sort an array containing only duplicate keys, because every key compares equal to the pivot and ends up on the same side of the partition. There are ways to handle duplicate keys, such as the one described in Quicksort is Optimal, but that solution only works with the Hoare partition, and I've implemented the Lomuto partition. To deal with duplicate keys, I alternate between moving a duplicate to the left of the pivot and leaving it on the right. The algorithm works something like this:
// partition array from index start to end
select pivot element and move it to array[start]
boolean dupHandler = true;   // alternates which side the next duplicate goes to
int index = start;           // marks the end of the "left of pivot" region
for (i from start + 1 to end) {
    int val = array[start].compareTo(array[i]);
    if (val == 0) {
        // duplicate of the pivot: move it left only every other time
        if (dupHandler)
            swap array[++index] and array[i]
        dupHandler = !dupHandler;
    } else if (val > 0) {
        // array[i] is smaller than the pivot: move it into the left region
        swap array[++index] and array[i]
    }
}
swap array[start] and array[index]   // move the pivot into its final place
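For reference, here is a compilable Java sketch of the same partition embedded in a full sort; the class and method names (DupAwareQuicksort, quicksort, partition, swap) and the random pivot selection are my own choices for illustration, not part of the code above:

import java.util.Random;

final class DupAwareQuicksort {

    private static final Random RNG = new Random();

    static <T extends Comparable<? super T>> void quicksort(T[] array, int start, int end) {
        if (start >= end) {
            return;
        }
        int p = partition(array, start, end);
        quicksort(array, start, p - 1);
        quicksort(array, p + 1, end);
    }

    // Lomuto-style partition over array[start..end] that alternates which side
    // duplicates of the pivot end up on.
    static <T extends Comparable<? super T>> int partition(T[] array, int start, int end) {
        // Pivot selection is an assumption here: pick a random element and move it to array[start].
        swap(array, start, start + RNG.nextInt(end - start + 1));

        boolean dupHandler = true;
        int index = start;                       // end of the region that goes left of the pivot
        for (int i = start + 1; i <= end; i++) {
            int val = array[start].compareTo(array[i]);
            if (val == 0) {                      // duplicate of the pivot
                if (dupHandler) {
                    swap(array, ++index, i);     // send every other duplicate to the left
                }
                dupHandler = !dupHandler;
            } else if (val > 0) {                // array[i] is smaller than the pivot
                swap(array, ++index, i);
            }
        }
        swap(array, start, index);               // put the pivot between the two regions
        return index;
    }

    private static <T> void swap(T[] array, int a, int b) {
        T tmp = array[a];
        array[a] = array[b];
        array[b] = tmp;
    }
}

A call like quicksort(values, 0, values.length - 1) sorts the whole array; with all-equal keys the alternation splits each partition roughly in half, which is what keeps that case out of the quadratic worst case.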
Is there a better (more efficient) way to handle duplicate keys?
EDIT: My code (as shown) requires compareTo to be consistent with equals (even though the Comparable contract does not strictly require that).
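For example (my illustration, not part of the question), java.math.BigDecimal is a standard type whose compareTo is not consistent with equals, so the partition above would treat as duplicates two values that equals considers distinct:

import java.math.BigDecimal;

class CompareToVsEquals {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");
        System.out.println(a.compareTo(b)); // prints 0     -> the partition treats a and b as duplicate keys
        System.out.println(a.equals(b));    // prints false -> equals also compares scale, so it disagrees
    }
}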