The best way is to have a very close look at the algorithm and analyze each step to calculate average and worst-case runtime class.
If that's not feasible, you can run the algorithm with relatively small inputs and compare the measured runtimes. If the runtime is exponential in any of the parameters, it should be blatantly obvious even with a difference of 10 or 20 in that parameter. Simply plotting the runtimes for, say,
- x = 10 and y in range(50)
- y = 10 and x in range(50)
- x in range(50) and y = x
should give you a rough idea. You can abort early when the runtime grows too large, say larger than 10000 times the runtime of (1, 1).
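The sweeps above can be sketched as follows. This is a minimal illustration, not a definitive harness; `f(x, y)` is a hypothetical stand-in for the algorithm under test, and the 10000x abort factor matches the rule of thumb above.

```python
import time

def f(x, y):
    # Hypothetical algorithm under test; replace with the real one.
    return sum(i * j for i in range(x) for j in range(y))

def sweep(func, pairs, abort_factor=10000):
    """Time func on each (x, y) pair; stop once runtime explodes."""
    start = time.perf_counter()
    func(1, 1)
    baseline = max(time.perf_counter() - start, 1e-9)  # avoid zero baseline
    results = []
    for x, y in pairs:
        start = time.perf_counter()
        func(x, y)
        elapsed = time.perf_counter() - start
        results.append(((x, y), elapsed))
        if elapsed > abort_factor * baseline:
            break  # runtime grew too large; abort early
    return results

# The three sweeps suggested above:
for pairs in ([(10, y) for y in range(1, 50)],
              [(x, 10) for x in range(1, 50)],
              [(x, x) for x in range(1, 50)]):
    sweep(f, pairs)
```

Plotting each result list on a log scale makes exponential growth show up as a straight line, which is usually the quickest visual check.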
This should give you a rough estimate, but be aware that it's neither precise (your test data may inadvertently follow certain patterns and hit a good case) nor sufficient (the factors involved may be very small; you won't correctly identify, say, x + 0.0001 * 1.05^y). Fortunately, in many cases, the bases in exponential algorithms are significantly larger than 1.
In Python, you can use the timeit module to measure runtimes accurately.
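A short sketch of how that might look; `f(x, y)` is again a hypothetical placeholder. Taking the minimum of several repeats is the usual way to reduce noise from other processes:

```python
import timeit

def f(x, y):
    # Hypothetical function; substitute the algorithm under test.
    return sum(range(x * y))

# Time 1000 calls, repeated 5 times; keep the best (least disturbed) run.
best = min(timeit.repeat(lambda: f(10, 20), repeat=5, number=1000))
print(f"best of 5 runs: {best:.6f} s for 1000 calls")
```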