
I have an algorithm used for signal quantization, and I have an equation that estimates its complexity for different parameter values. The algorithm is implemented in C. Sometimes the equation says the complexity is lower, yet the measured running time is higher. I'm not 100% sure the equation is correct.

My question is: do running time and algorithmic complexity always have a direct relationship? That is, does higher complexity always mean higher running time, or does it differ from one algorithm to another?

Chamran Ashour

2 Answers


Time complexity is more a measure of how time varies with input size than an absolute measure.
(This is an extreme simplification, but it will do for explaining the phenomenon you're seeing.)

If n is your problem size and your actual running time is 1000000000 * n, it has linear complexity, while 0.000000001 * n^2 would be quadratic.

If you plot them against each other, you'll see that 0.000000001 * n^2 is smaller than 1000000000 * n all the way up to around n = 1e18, despite its "greater complexity".

(0.000000001 * n^2 + 1000000000 * n would also be quadratic, but would always have worse execution time than both.)
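
To make the crossover concrete, here is a minimal C sketch (my own illustration, not from the answer) that evaluates those two hypothetical cost functions at a few problem sizes and reports which one is cheaper; the switch happens around n = 1e18.

```c
#include <stdio.h>

/* Hypothetical cost functions from the example above:
 * linear with a huge constant vs. quadratic with a tiny constant. */
static double linear_cost(double n)    { return 1000000000.0 * n; }
static double quadratic_cost(double n) { return 0.000000001 * n * n; }

int main(void)
{
    const double sizes[] = { 1e3, 1e9, 1e17, 1e19, 1e21 };

    for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; ++i) {
        double n = sizes[i];
        printf("n = %.0e: linear = %.3e, quadratic = %.3e -> %s is cheaper\n",
               n, linear_cost(n), quadratic_cost(n),
               linear_cost(n) < quadratic_cost(n) ? "linear" : "quadratic");
    }
    return 0;
}
```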

molbdnilo
  • Or, to put it a slightly different way, remember that Big-O type complexity deals with limiting behavior at infinity. And, of course, it doesn't account for a great many variables that can have a huge impact on performance. – Nik Bougalis May 25 '16 at 14:02

No, running time and algorithmic complexity do not have a simple relationship.

Estimating or comparing run times can easily get very complicated and detailed. There are many variables that vary from run to run, even with the same program and input data - that's why benchmarks do multiple runs and process the results statistically.
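
For example, here is a minimal timing sketch in C, assuming a hypothetical quantize() stand-in for the real routine: it runs the routine several times and reports the best and mean times instead of trusting a single measurement.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical workload standing in for the real quantization routine. */
static void quantize(void)
{
    volatile double acc = 0.0;
    for (int i = 0; i < 1000000; ++i)
        acc += i * 0.5;
}

int main(void)
{
    enum { RUNS = 10 };
    double best = 1e300, total = 0.0;

    for (int r = 0; r < RUNS; ++r) {
        clock_t start = clock();
        quantize();
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

        total += elapsed;
        if (elapsed < best)
            best = elapsed;
    }
    printf("runs: %d, best: %.6f s, mean: %.6f s\n", RUNS, best, total / RUNS);
    return 0;
}
```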

If you're looking for big differences, the two most significant factors are generally algorithmic complexity ("big O()") and start-up cost. Frequently, the algorithm with the lower big O() requires more complex setup; that is, it takes more initial work in the program before entering the actual loop. If that initial setup takes longer than running the rest of the algorithm on a small data set, the algorithm with the larger O() rating will run faster for those small data sets. For large data sets, the lower O() algorithm will be faster. There is a data set size at which the total times are equal, called the "crossover" size.

For performance, you'd want to check if most of your data was above or below that crossover as part of picking the algorithm to implement.
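
As an illustration, here is a minimal C sketch of that idea, assuming two hypothetical implementations (quantize_simple with cheap setup but worse asymptotic cost, quantize_fast with expensive setup but better asymptotic cost - both just trivial stubs here) and a placeholder CROSSOVER_N that you would determine by benchmarking on your own hardware and data.

```c
#include <stddef.h>
#include <stdio.h>

#define CROSSOVER_N 4096  /* hypothetical crossover size, found by measurement */

/* Stand-in for the algorithm with cheap setup but worse big O(). */
static void quantize_simple(const double *in, double *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = (double)(long)in[i];
}

/* Stand-in for the algorithm with expensive setup but better big O();
 * imagine table-building or other initialization here that only pays off
 * for large n. */
static void quantize_fast(const double *in, double *out, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = (double)(long)in[i];
}

/* Dispatch on data size relative to the measured crossover. */
static void quantize(const double *in, double *out, size_t n)
{
    if (n < CROSSOVER_N)
        quantize_simple(in, out, n); /* setup cost dominates below the crossover */
    else
        quantize_fast(in, out, n);   /* asymptotic advantage pays off above it */
}

int main(void)
{
    double in[8] = { 0.1, 1.7, 2.4, 3.9, 4.2, 5.5, 6.6, 7.8 };
    double out[8];

    quantize(in, out, 8);
    printf("out[3] = %g\n", out[3]);
    return 0;
}
```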

Getting more and more detail and accuracy in runtime predictions gets much more complex very quickly.

mpez0