
I recently wrote a short Python program to calculate the factorial of a number as a test to see how much faster integer multiplication is compared to floating point multiplication. Imagine my surprise when I observed that it was the floating point multiplication that was faster! I'm puzzled by this and am hoping someone can enlighten me. I'm using exactly the same function for the factorial calculation and simply passing it a float versus an integer. Here is the code:

import time

def fact(n):
    n_fact = n
    while n > 2:
        n_fact *= n - 1
        n -= 1
    print(n_fact)
    return n_fact

n = int(input("Enter an integer for factorial calculation: "))
n_float = float(n)

# integer factorial
start = time.time()
fact(n)
end = time.time()
print("Time for integer factorial calculation: ", end - start, "seconds.")

# float factorial
start = time.time()
fact(n_float)
end = time.time()
print("Time for float factorial calculation: ", end - start, "seconds.")

When I run this program the results vary, but the integer calculation comes out faster most of the time, which is counter to everything I thought I knew (keep in mind, I'm no expert). Is there something wrong with my method of timing the calculation? Do I need to run the calculation thousands of times to get a more accurate measure of the time? Any insight would be appreciated.

DJElectric
  • Just a side note, you should use `timeit` to benchmark running times; it is possible that your results are wrong using this method. – Thomas Schillaci Feb 21 '20 at 12:15
  • In the first paragraph you stated you observed the `float` operations to be faster, whereas in the last paragraph you said the `int` operations were faster. Which case did you observe? When I time your function with `timeit` I see integers being faster up to about `n = 50`, above which there is a small edge in favour of floating point operations (which I qualitatively would expect given the fixed-size nature of `floats` vs the unlimited-size `ints` in Python). (NB anything above `n = 170` exceeds the range of `float` values.) – Seb Feb 21 '20 at 12:29
  • On my platform the integer calculation is faster (you should not print in your benchmarked function). Integer multiplications are generally faster than floating point multiplications. Python integers, however, have [unlimited precision](https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex), while floats are usually C `double` under the hood. Depending on whether you are working with large numbers, this could affect your results. – sim Feb 21 '20 at 12:29

2 Answers


Thanks for the comments, and for the tip about using `timeit`. When I rerun the code using `timeit`, I find results similar to what Seb mentions: the integer calculations are faster for small values (for me, up to about 15), and then the floats are faster, becoming significantly faster for larger values. This is exactly as I would have expected!
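
A minimal sketch of that `timeit` comparison (the repeat count and the test value here are arbitrary choices; `fact` is the question's function with the `print` call removed, since printing would otherwise dominate the measurement):

import timeit

def fact(n):
    # the question's function, minus the print call
    n_fact = n
    while n > 2:
        n_fact *= n - 1
        n -= 1
    return n_fact

n = 50  # try values around 15 and well above it

# run each version many times so that per-call noise averages out
int_time = timeit.timeit(lambda: fact(n), number=10_000)
float_time = timeit.timeit(lambda: fact(float(n)), number=10_000)

print(f"int:   {int_time:.4f} s for 10,000 calls")
print(f"float: {float_time:.4f} s for 10,000 calls")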

DJElectric

I was curious, so I am reviving an old post with an over-engineered answer.

Running the calculation for every value in `range(100, 100_000, 500)`, i.e. from 100 up to 100k in steps of 500, gave the following distribution of times.

[Image: distribution of execution times for the integer and float calculations]

A t-test suggests that, on average, the float calculations tend to be slower than the integer calculations:

  • T-statistic: -3.934632985311052
  • P-value: 0.00011508966730943904

Thus, floats tend to run slower than ints. In practice, however, I wouldn't bother: the difference is in microseconds.
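
A sketch of the experiment (assuming `timeit` for the per-value timings and a paired t-test via `scipy.stats.ttest_rel`; the exact test and setup are guesses at the original code, and the full sweep takes a long time, so shrink the range for a quick trial):

import timeit
from scipy import stats

def fact(n):
    n_fact = n
    while n > 2:
        n_fact *= n - 1
        n -= 1
    return n_fact

int_times, float_times = [], []
for n in range(100, 100_000, 500):
    # one timed call per value of n; the big-int factorials get
    # expensive, which is why the full sweep takes a while
    int_times.append(timeit.timeit(lambda: fact(n), number=1))
    # float factorials overflow to inf above n = 170, but the
    # multiplications still execute and can still be timed
    float_times.append(timeit.timeit(lambda: fact(float(n)), number=1))

# a negative statistic means the integer timings are smaller on
# average, i.e. the float version is slower
t_stat, p_value = stats.ttest_rel(int_times, float_times)
print("T-statistic:", t_stat)
print("P-value:", p_value)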

Regarding the unbounded size of integers in Python:

import sys
sys.maxsize  # 2**63 - 1, about 9.22 * 10**18, on a 64-bit build

Note that in CPython every `int` uses the same arbitrary-precision representation; `sys.maxsize` is simply the largest value of the C `Py_ssize_t` type, not a cutoff for a separate native format. What matters for speed is how many internal digits a value occupies: multiplications stay cheap while a number fits in a machine word or two and get progressively slower as the factorial grows to thousands of digits, whereas a `float` is a fixed-size C `double` whose multiplication cost does not change.
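
A rough way to see this (a sketch; the operand sizes and repeat count are arbitrary):

import timeit

small = 12_345           # fits comfortably in one machine word
big = 12_345 ** 1_000    # thousands of digits
flt = 12_345.0           # a fixed-size C double

for label, x in [("small int", small), ("big int", big), ("float", flt)]:
    # int multiplication cost grows with the number of internal
    # digits; float multiplication cost stays flat
    t = timeit.timeit(lambda: x * x, number=10_000)
    print(f"{label:>9}: {t:.4f} s for 10,000 multiplications")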

Echo9k