12

In our discrete mathematics course at my university, the teacher shows his students the Ackermann function and assigns them to develop the function on paper.

Besides being a benchmark for recursion optimisation, does the Ackermann function have any real uses?

MoveFast
  • 3,011
  • 2
  • 27
  • 53
Michaël Larouche
  • 837
  • 10
  • 18

3 Answers

18

Yes. The (inverse) Ackermann function appears in the complexity analysis of algorithms. When it does, you can almost ignore that term, since it grows even more slowly than the iterated logarithm lg*(n), i.e. the number of times you can apply log(log(... log(n) ...)) before the result drops to 1. For example: Minimum Spanning Trees (also here) and Disjoint Set forest construction.

Also: Davenport–Schinzel sequences
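The Disjoint Set forest is the standard concrete example: with union by rank and path compression, m operations on n elements take O(m·α(n)) amortized time, where α is the inverse Ackermann function (at most 4 for any physically representable n). A self-contained sketch:

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression.

    With both optimizations, a sequence of m operations on n elements
    runs in O(m * alpha(n)) amortized time, where alpha is the inverse
    Ackermann function -- effectively a small constant in practice.
    """

    def __init__(self, size):
        self.parent = list(range(size))
        self.rank = [0] * size

    def find(self, x):
        # Find the root, then compress: point every visited node at it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        # Union by rank: attach the shallower tree under the deeper one.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

This is the same structure Kruskal's MST algorithm uses to detect cycles, which is where the inverse-Ackermann term in its analysis comes from.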

Jonathan Graehl
  • 9,182
  • 36
  • 40
12

The original "use" of the Ackermann function was to show that there are computable functions which are not primitive recursive, i.e. which cannot be computed using only for loops with predetermined upper limits.

The Ackermann function is such a function: it grows too fast to be primitive recursive.

I don't think there are really practical uses; it grows too fast to be useful. You can't even explicitly represent the numbers beyond A(4, 3) in a reasonable amount of space.

starblue
  • 55,348
  • 14
  • 97
  • 151
3

I agree with the other answer (by wrang-wrang) "in theory".

In practice Ackermann is not too useful, because the only algorithm complexities you tend to encounter involve 1, N, N^2, N^3, and each of those multiplied by logN. (And since logN is never more than 64, it's effectively a constant term anyway.)
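To put numbers on that "never more than 64" remark (a quick sketch, assuming base-2 logarithms and input sizes that fit in a 64-bit address space):

```python
import math

# Even for the largest count addressable on a 64-bit machine,
# the log N factor never exceeds 64 -- effectively a constant.
for n in (10 ** 6, 10 ** 12, 2 ** 64):
    print(f"n = {n}: log2(n) = {math.log2(n):.1f}")
```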

The point being: "in practice", unless your algorithm's complexity is "N times too big", you don't care about complexity, because real-world factors will dominate. (A function that executes in O(inverse-Ackermann) time is theoretically better than one that executes in O(logN) time, but in practice you'll measure the two actual implementations against real-world data and select whichever actually performs better. In contrast, complexity theory does "matter in practice" for e.g. N versus N^2, where the algorithmic effects really do overpower any real-world effects. I find that "N" is the smallest measure that matters in practice.)

Brian
  • 117,631
  • 17
  • 236
  • 300
  • Indeed, theory analysis only gives you the basis for performance analysis. – Michaël Larouche Sep 14 '09 at 23:46
  • Can you explain how logN is never more than 64? – Frank Q. Apr 25 '12 at 04:59
  • 4
    Usually the "log" is base 2. If log(n) is 64, that means you have 2^64 items of data. That's far more than you would have in practice; indeed, on a 64-bit computer you have 64-bit pointers, so you can't even easily address more than 2^64 bytes. – Andrey Aug 06 '12 at 08:47
  • @Andrey Your explanation should be added to this answer. In higher-bit architectures, it still matters to a point. The difference matters in something like shaders on a GPU where every second counts. – Axoren Apr 02 '15 at 01:56