I have a simple question about using binary search inside the insertion sort algorithm. More precisely: at each step of the usual insertion sort, instead of linearly scanning the previous (sorted) subarray to find where the current element belongs, we use binary search on that sorted subarray to find its insertion point.
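To make sure we are talking about the same variant, here is a minimal sketch of both versions in Python (the function names are mine, and the binary search uses the standard bisect module):

    import bisect

    def insertion_sort(a):
        """Usual insertion sort: scan the sorted prefix linearly from the right."""
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:   # one comparison per shifted element
                a[j + 1] = a[j]            # shift right
                j -= 1
            a[j + 1] = key
        return a

    def binary_insertion_sort(a):
        """Same algorithm, but the insertion point is found by binary search."""
        for i in range(1, len(a)):
            key = a[i]
            pos = bisect.bisect_right(a, key, 0, i)  # O(log i) comparisons
            a[pos + 1:i + 1] = a[pos:i]              # still O(i) shifts in the worst case
            a[pos] = key
        return a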
I know that this reduces the number of comparisons the algorithm makes (O(log n) per insertion instead of O(n)), but the number of shifts (swaps) needed at each step still dominates, so the overall complexity is still O(n^2).
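Spelling out the standard worst-case counts (nothing specific to my test data, just the textbook sums):

$$\text{comparisons} \approx \sum_{i=2}^{n} \lceil \log_2 i \rceil = \Theta(n \log n), \qquad \text{shifts} \approx \sum_{i=2}^{n} (i - 1) = \Theta(n^2)$$

so the shifting term dominates no matter how the insertion point is found.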
I also know that asymptotic complexity does not translate directly into running time. I have compared the running times of both algorithms for "small" values of n (the array size), up to about 500000, and binary insertion sort was always faster than the usual insertion sort.
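This is not my exact benchmark, just a sketch of the kind of measurement I mean (it assumes the two functions from the snippet above are in scope, and times each variant on a copy of the same random input):

    import random
    import time

    def time_sort(sort_fn, data):
        """Time one run of sort_fn on a copy of data, so both variants see the same input."""
        work = list(data)
        start = time.perf_counter()
        sort_fn(work)
        return time.perf_counter() - start

    # insertion_sort and binary_insertion_sort are assumed to be the functions sketched above.
    for n in (1_000, 5_000, 20_000):  # small sizes here; pure Python gets very slow long before 500000
        data = [random.randrange(n) for _ in range(n)]
        print(n, time_sort(insertion_sort, data), time_sort(binary_insertion_sort, data))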
The fact that both are O(n^2) tells me that, as n gets large enough, the running times should become similar, right? Any idea what "large enough" would be in this situation to actually see similar running times?