For very large factorials:
DO NOT multiply integers, or even floating point numbers. Compute the logarithms of the numbers, and add the logarithms.
For example, if you are computing the probabilities of the values of a binomial distribution (say for flipping a coin 1000 times), you will need to compute 1000 factorial and divide it by the factorial of the number of heads and the factorial of the number of tails.
1000 factorial has 2568 decimal digits, with the first digit a "4" and the last 249 digits being "0", with a dog's breakfast of digits in between. Approximated as a floating point decimal number, 1000! is about 4.0238726e2567.
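(A quick sanity check in Python, using math.lgamma — the log of the gamma function, so lgamma(n + 1) is ln(n!) — to recover those figures without ever building the 2568-digit integer. A sketch, not anything official:)

```python
import math

# ln(1000!) via the log-gamma function: lgamma(n + 1) == ln(n!)
ln_fact = math.lgamma(1001)                 # ~5912.128
log10_fact = ln_fact / math.log(10)         # ~2567.6

digits = math.floor(log10_fact) + 1                    # 2568 decimal digits
lead = 10 ** (log10_fact - math.floor(log10_fact))     # ~4.0239, so 1000! ~ 4.0239e2567

# Trailing zeros come from factors of 5 (Legendre's formula): 200 + 40 + 8 + 1 = 249
zeros = sum(1000 // 5**k for k in range(1, 5))

print(digits, lead, zeros)
```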
But if you are calculating many numbers, for example a table of probabilities for a flipped coin, it is far easier to add logarithms and convert back with exp() at the end than to multiply thousands-of-digits numbers.
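A minimal sketch of that approach, again leaning on math.lgamma so nothing bigger than a logarithm ever exists. The function name and the fair-coin default are my own choices, not any standard API:

```python
import math

def log_binom_pmf(n, k, p=0.5):
    """ln P(X = k) for X ~ Binomial(n, p), computed entirely in log space."""
    log_choose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return log_choose + k * math.log(p) + (n - k) * math.log(1 - p)

# Whole table for 1000 fair flips: add logs everywhere, exp() only at the end.
table = [math.exp(log_binom_pmf(1000, k)) for k in range(1001)]
print(table[500])   # ~0.0252, the probability of exactly 500 heads
```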
You can find libraries for "bigint" math, but for solving practical problems (Is the coin biased, and how certain can I be of that? At 6 sigma computed bias, I shoot the crooked coin owner), why bother?
The natural logarithm of 1000 factorial is less than 6000; you can compute a million factorial this way too. With a 64-bit float, the log of 1M! is around 13 million, so its integer part eats roughly eight of your ~16 significant digits; if that leaves too little accuracy for your calculation, "bigfloat" (arbitrary-precision) math will cover you.
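For scale, the same lgamma shortcut as above (a sketch; the exact figures are approximate):

```python
import math

ln_fact = math.lgamma(1_000_001)   # ln(1_000_000!) ~ 12_815_518.4 -- fits a double easily
print(ln_fact / math.log(10))      # ~5_565_708.9, so 1_000_000! has ~5.57 million digits
```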
That said, a million coin flips and subsequent calculations will age the participants to death.
For practical calculations like the binomial formula, you end up dividing by two "half-largish" factorials and a heap of 2s. So, you can schedule some "divides" (pairing/cancelling terms and subtracting some logarithms from the total) through the course of the computation, to keep the sum small and the floating point accuracy good.
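One common version of that trick (not claiming this is what the pros do): build ln C(n, k) from the logs of paired ratios (n-k+i)/i, so every term added is the log of a modest number and the running sum stays as small as it can. A sketch:

```python
import math

def log_choose(n, k):
    """ln C(n, k), pairing each numerator term with a denominator term
    so every factor is a modest ratio rather than a huge factorial."""
    k = min(k, n - k)          # C(n, k) == C(n, n - k); use the smaller side
    total = 0.0
    for i in range(1, k + 1):
        total += math.log((n - k + i) / i)   # each ratio lies between 1 and n
    return total

# Fair-coin probability of exactly 500 heads in 1000 flips:
# subtract the "heap of 2s" (1000 * ln 2) before exponentiating.
print(math.exp(log_choose(1000, 500) - 1000 * math.log(2)))   # ~0.0252
```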
I'm sure that Real Numerical Mathematicians have many accuracy-preserving tricks for calculations like this: simple ways to pair and cancel factors in the fraction's numerator and denominator so that you only calculate what's necessary, and never let the logarithms grow so large that the floating point mantissa has too few digits left for the fractional part.
If any math professionals are reading this, please translate what you know about factorials and their uses into halfwit language, and explain to us confident idiots how the pros actually do this. I'll give you a Starbucks gift coupon, which you can transform into theorems.