
While trying to properly understand Big-O, I started wondering whether it's true that O(n log n) algorithms are always better than all O(n^2) algorithms.

Are there any particular situations where O(n^2) would be better?

I've read multiple times that in sorting, for example, an O(n^2) algorithm like bubble sort can be particularly quick when the data is almost sorted. Would it be quicker than an O(n log n) algorithm, such as merge sort, in that case?

  • Possible duplicate of [Which is better: O(n log n) or O(n^2)](https://stackoverflow.com/questions/23329234/which-is-better-on-log-n-or-on2) – AbcAeffchen May 27 '18 at 21:22
  • Try writing the code for n-digit multiplication using the naive "schoolchild" method (O(n^2)) and then try to implement it using the Schönhage–Strassen algorithm (only slightly worse than O(n log n)). You will find the naive method has some advantages in implementation difficulty :) – Paul Hankin May 28 '18 at 08:09
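For anyone who wants to try the first half of that experiment, here is a minimal sketch of the naive "schoolchild" method in Python; it works on little-endian digit lists, an assumption made purely for simplicity:

```python
# Naive O(n^2) "schoolchild" multiplication of two numbers given as
# little-endian digit lists, e.g. 12 -> [2, 1]. A sketch for illustration.
def school_multiply(a, b):
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):          # one row of the school method
        carry = 0
        for j, db in enumerate(b):      # n digit products per row -> O(n^2)
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    return result

print(school_multiply([2, 1], [4, 3]))  # 12 * 34 = 408 -> [8, 0, 4, 0]
```

A Schönhage–Strassen implementation, by contrast, needs fast transforms over modular rings before it can multiply a single digit.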

4 Answers


O(n log n) is better than O(n^2) asymptotically.

Big-O, Big-Theta, Big-Omega, all those measure the asymptotic behavior of functions, i.e., how functions behave when their argument goes toward a certain limit.

O(n log n) functions grow slower than O(n^2) functions; that's what Big-O notation essentially says. However, this does not mean that O(n log n) is always faster. It merely means that past some point, as n keeps rising, the O(n log n) function will always be the cheaper one.

[Figure: growth of f(n) and g(n), where f(n) = O(g(n))]

In that image, f(n) = O(g(n)). Note that there is a range where f(n) is actually more costly than g(n), even though it is bounded asymptotically by g(n). However, when talking limits, or asymptotics for that matter, f(n) outperforms g(n) "in the long run," so to speak.
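To get a concrete feel for where the crossover happens, here is a minimal sketch; the constant factors 100 and 1 are invented purely for illustration:

```python
import math

def f(n):
    return 100 * n * math.log2(n)   # O(n log n), large constant factor

def g(n):
    return n * n                    # O(n^2), small constant factor

# Find the first n where the O(n log n) cost drops below the O(n^2) cost.
n = 2
while f(n) >= g(n):
    n += 1
print(n)  # ~1000 here; below this point the O(n^2) function is cheaper
```

Everything left of that printed value is the range where f(n) is more costly than g(n); everything to the right of it is "the long run."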

cadaniluk
  • I understand that, but what I don't understand is what possible situations can occur in which O(n^2) is better than O(n log n) and how that is possible (ignoring hardware issues, etc.)? –  May 27 '18 at 21:34
  • @Laura Take a function that describes the runtime of a program which first prints out all the numbers from 1 to 10^100 and then executes merge sort on an input: T(n) = O(n log n) + c. c is a constant term here and it is *huge*: after all, we're printing from 1 to 10^100! It is a constant, though, because it is independent of the input size n. However, T(n) is O(n log n)! Now take a simple bubble sort algorithm, which runs in O(n^2). Which one is going to be faster? – cadaniluk May 27 '18 at 21:37
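A minimal sketch of that thought experiment; the cost formulas come straight from the comment above and are not measurements of real code:

```python
import math

HUGE = 10 ** 100  # cost of printing 1..10^100 first; constant in n

def t_mergesort_with_prelude(n):
    return n * math.log2(n) + HUGE   # still O(n log n) as n -> infinity

def t_bubblesort(n):
    return n ** 2                    # plain O(n^2)

for n in (10, 10 ** 6, 10 ** 12):
    print(n, t_bubblesort(n) < t_mergesort_with_prelude(n))  # True every time
```

Bubble sort stays cheaper until n reaches roughly 10^50, which no real input will; asymptotically, though, merge sort still wins.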

No, O(n log n) algorithms are not always better than O(n^2) ones. The Big-O notation describes an upper bound on the asymptotic behavior of an algorithm, i.e. its behavior as n tends towards infinity.

Given this definition, you have to consider two aspects:

  1. The Big-O notation is an upper bound on the algorithm's complexity, meaning that for some inputs (like the ones you mentioned for sorting algorithms) an algorithm with worse Big-O complexity may actually perform better (bubble sort runs in O(n) on an already sorted array, while mergesort and quicksort always take at least n log n steps);
  2. The Big-O notation only describes the class of complexity, hiding all the constant factors that in real-world scenarios may be relevant. For example, an algorithm with cost 1000000 * n, which is in class O(n), performs worse than an algorithm with cost 0.5 * n^2 (class O(n^2)) for all inputs smaller than 2000000, as the sketch below shows. Basically the Big-O notation tells you that for big enough inputs n, the O(n) algorithm will perform better than the O(n^2) one, but if you work with small inputs you may still prefer the latter solution.
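A minimal sketch of the second point; the cost formulas are the ones from the answer (1000000 * n and 0.5 * n^2), not measurements of real code:

```python
def cost_linear(n):
    return 1_000_000 * n     # O(n), huge constant factor

def cost_quadratic(n):
    return 0.5 * n ** 2      # O(n^2), small constant factor

for n in (1_000, 100_000, 1_999_999, 2_000_001, 10_000_000):
    winner = "O(n)" if cost_linear(n) < cost_quadratic(n) else "O(n^2)"
    print(f"n = {n:>10}: the cheaper algorithm is {winner}")
# The winner flips from O(n^2) to O(n) once n exceeds 2000000.
```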
SimoV8

In addition to @cadaniluk's answer:

If you restrict the inputs of an algorithm to a very special type, this can also affect the running time. E.g. if you run sorting algorithms only on already sorted lists, BubbleSort will run in linear time, but MergeSort will still need O(n log n) (see the sketch below). There are also algorithms that have a bad worst-case complexity but a good average-case complexity. This means that there are bad input instances on which the algorithm is slow, but overall it is very unlikely that you encounter such a case.
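Here is a minimal sketch of that BubbleSort behavior, using the usual early-exit flag that gives it its linear best case:

```python
# Bubble sort that stops as soon as a pass performs no swaps.
# Returns the number of passes, as a proxy for the running time.
def bubble_sort(a):
    passes = 0
    for i in range(len(a) - 1):
        passes += 1
        swapped = False
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:   # no swaps: already sorted, stop early
            break
    return passes

print(bubble_sort(list(range(1000))))        # 1 pass    -> linear time
print(bubble_sort(list(range(1000))[::-1]))  # 999 passes -> quadratic time
```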

Also, never forget that Big-O notation hides constants and additive lower-order terms. So an algorithm with worst-case complexity O(n log n) could actually have a running time of 2^10000 * n * log n, and your O(n^2) algorithm could actually run in (1/2^1000) * n^2. So for n < 2^10000 you really want to use the "slower" algorithm.

AbcAeffchen

Here is a practical example. The GCC implementations of sorting functions have O(n log n) complexity. Still, they switch to an O(n^2) insertion sort as soon as the size of the part being sorted drops below some small constant. That's because for small sizes, the O(n^2) algorithm tends to be faster in practice.

See the libstdc++ source for some of the internal implementation.
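This is not GCC's actual code, but a minimal sketch of the same idea in Python: a merge sort that hands small subranges to an O(n^2) insertion sort. The cutoff of 16 is an assumption, similar in spirit to libstdc++'s small-range threshold:

```python
import random

CUTOFF = 16  # assumed threshold below which the O(n^2) sort takes over

def insertion_sort(a, lo, hi):
    # O(n^2) worst case, but very fast on tiny ranges.
    for i in range(lo + 1, hi):
        x = a[i]
        j = i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def hybrid_sort(a, lo=0, hi=None):
    # O(n log n) merge sort overall; small parts go to insertion sort.
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_sort(a, lo, mid)
    hybrid_sort(a, mid, hi)
    merged, i, j = [], lo, mid
    while i < mid and j < hi:        # standard merge of the two halves
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged += a[i:mid] + a[j:hi]
    a[lo:hi] = merged

data = random.sample(range(1000), 1000)
hybrid_sort(data)
print(data == sorted(data))  # True
```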

Gassa