
It's quite easy to see that n! grows slower than almost anything to the N power (say, 100^N), and so, if a problem is considered NP-complete and one happened upon an n! algorithm that approximates the solution, one would do the Snoopy dance.

I have 2 questions about this situation:

  1. Would the n! algorithm be considered a solution in polynomial time? A factorial certainly doesn't appear to be a term raised to a power.
  2. If finding an n! solution means we have a decently fast algorithm, and since n! grows faster than 2^N, then does this mean that some NP-complete problems do not need heuristic/approximation algorithms (except for obscure cases)?

Of course, these two questions rely on the first paragraph being true; if I've erred, please let me know.

Zian Choy
  • n! does grow faster than k^n. Consider: once n > k, every further factor of n! exceeds k, so n! will certainly grow faster. – Rafe Jun 04 '11 at 06:26
  • I don't understand your comment because n! evaluated for some value of n such that n is greater than k is a single number (example: if k = 10, 10! = 3628800). – Zian Choy Jun 04 '11 at 22:27
  • Consider the first 100 terms of n! and 100^n. Up to this point, 100^n has been growing faster than n! (i.e., each factor has been larger in 100^n vs n! -- 1 vs 100, 2 vs 100, ...). However, past this point, each term in n! is going to be larger than the corresponding term in 100^n (101 vs 100, 102 vs 100, ...). – Rafe Jun 06 '11 at 02:03

2 Answers

  1. No, factorial time is not polynomial time. Polynomial time normally means an equation of the form O(N^k), where N = number of items being processed and k = some constant. The important part is that the exponent is a constant -- you're multiplying N by itself a number of times that's fixed, not dependent on N itself. A factorial-complexity algorithm means the number of multiplications is not fixed -- the number of multiplications itself grows with N.

  2. You seem to have the same problem here. N^2 would be polynomial complexity; 2^N would not be. Your basic premise is mistaken as well -- a factorial-complexity algorithm does not mean "we have a decently fast algorithm", at least as a general rule. If anything, the conclusion is rather the opposite: a factorial algorithm may be practical in a few special cases (i.e., where N is extremely small) but becomes impractical very quickly as N grows.
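The distinction in point 1 can be sketched in a couple of lines (my illustration, not from the answer): for O(N^k) the number of factors is a fixed k, while for N! the number of factors is N itself.

```python
def power_factor_count(n, k=3):
    """N^k multiplies N by itself k times -- k is fixed, independent of n."""
    return k

def factorial_factor_count(n):
    """N! multiplies together 1 * 2 * ... * N -- the count grows with n."""
    return n

for n in (5, 50, 500):
    print(n, power_factor_count(n), factorial_factor_count(n))
```

Whatever n is, the polynomial's factor count stays at k; the factorial's factor count tracks n.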

Let's try to put this in perspective. A binary search is O(log N). A linear search is O(N). In sorting, the "slow" algorithms are O(N^2), and the "advanced" algorithms O(N log N). A factorial-complexity algorithm is (obviously enough) O(N!).

Let's try to put some numbers to that, considering (for the moment) only 10 items. Each of these will be roughly how many times longer processing should take for 10 items instead of 1 item:

O(log N): 2
O(N): 10
O(N log N): 23
O(N^2): 100
O(N!): 3,628,800

For the moment I've cheated a bit and used a natural logarithm instead of a base 2 logarithm, but we're only trying for ballpark estimates here (and the difference is a fairly small constant factor in any case).

As you can see, the growth rate for the factorial-complexity algorithm is much faster than for any of the others. If we extend it to 20 items, the difference becomes even more dramatic:

O(log N): 3
O(N): 20
O(N log N): 60
O(N^2): 400
O(N!): 2,432,902,008,176,640,000
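Both tables can be reproduced with a short script (using the natural logarithm, as the text does for its ballpark figures):

```python
import math

def rough_ops(n):
    """Ballpark operation counts for n items (natural log, as in the text)."""
    return {
        "O(log N)": math.log(n),
        "O(N)": n,
        "O(N log N)": n * math.log(n),
        "O(N^2)": n ** 2,
        "O(N!)": math.factorial(n),
    }

for n in (10, 20):
    print(f"--- {n} items ---")
    for name, value in rough_ops(n).items():
        print(f"{name}: {value:,.0f}")
```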

The growth rate for N! is so fast that factorial algorithms are pretty much guaranteed to be impractical except when the number of items involved is known to be quite small. For grins, let's assume that the basic operations for the processes above can each run in a single machine clock cycle. Just for the sake of argument (and to keep the calculations simple) let's assume a 10 GHz CPU, so the baseline is that processing one item takes .1 ns. In that case, with 20 items:

O(log N) = .3 ns
O(N) = 2 ns
O(N log N) = 6 ns
O(N^2) = 40 ns
O(N!) = 7.7 years.
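As a sanity check on the 7.7-year figure, under the same assumptions (one operation per cycle on a hypothetical 10 GHz CPU, i.e. 0.1 ns per operation):

```python
import math

NS_PER_OP = 0.1  # hypothetical 10 GHz CPU, one operation per clock cycle
n = 20

for name, ops in [
    ("O(log N)", math.log(n)),
    ("O(N)", n),
    ("O(N log N)", n * math.log(n)),
    ("O(N^2)", n ** 2),
    ("O(N!)", math.factorial(n)),
]:
    ns = ops * NS_PER_OP
    years = ns / 1e9 / 86400 / 365  # nanoseconds -> seconds -> years
    if years >= 1:
        print(f"{name} = {years:.1f} years")
    else:
        print(f"{name} = {ns:.1f} ns")
```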

Jerry Coffin
  • Thank you so much for confirming my gut feelings. >"the number of multiplications itself grows with N." To be clearer, since even an O(n) algorithm would make that statement true, increasing the sample size (n) would increase the number of multiplications in a series that grows faster than something that's polynomial time. (more to come in another comment) – Zian Choy Jun 04 '11 at 22:28
  • Re: Latter part of the comment and my mistake about ! vs. ^n I've been tricked! (Or at least, I should've looked at the WolframAlpha graphs for greater values of X and Y; see http://www.wolframalpha.com/input/?i=x%21+vs.+5%5Ex). Thanks for correcting me. – Zian Choy Jun 04 '11 at 22:32
  • For years I've had this bizarre misconception that O(n) < O(n!) < O(n^2). I see how absurd that idea is now. Thanks for clearing things up! – Zaz Oct 05 '15 at 01:30

It's quite easy to see that the factorial is (approximately) exponential in behaviour.

It can be (very crudely) approximated as n^n (more specifically, sqrt(2πn)·(n/e)^n).

So if you have found any specific M where you think M^n is a good approximation, you're (probably) wrong. 269! is larger than 100^269, and since n! keeps being multiplied by factors larger than 100 from there on, it will continue to grow faster.
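The crossover point mentioned above can be checked by brute force (my sketch; Python's exact integer arithmetic makes the comparison safe, with no overflow or rounding):

```python
from math import factorial

# Find the first n at which n! overtakes 100^n
n = 1
while factorial(n) <= 100 ** n:
    n += 1
print(n)  # 269
```

Below 269, 100^n stays ahead; from 269 onward every additional factor of n! exceeds 100, so n! pulls away for good.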

Vatine
  • 1
    +1 for Stirling's approximation. Although unless you derived it yourself, you probably should have mentioned his name. :-) – Nemo Jun 04 '11 at 16:05
  • Yep, upon further examination, even the confusing WolframAlpha graph (http://www.wolframalpha.com/input/?i=x%21+vs.+5%5Ex) reveals that n! and 5^x are more or less translated versions of each other. – Zian Choy Jun 04 '11 at 22:34
  • @Nemo: Aye, I usually use n^n as a rough approximation myself. – Vatine Jun 05 '11 at 11:04