
How can we calculate the simulated running time of an algorithm based on its time complexity? For instance, if we know that the time complexity of an algorithm is O(n), what will its running time be in a discrete event simulator? I believe the simulator needs to take into account the capacity of the node running the algorithm, but how do we use these figures to calculate the running time?

In the simulator, I have a central node which broadcasts messages. Each receiving node of the broadcasted message handles the message like this:

handleMessage() {
    processMessage();
    replyToCentralNode();
}

In the simulator, how do I calculate the execution time of processMessage? Can I get a concrete value, something like 10 s or 0.1 ms?
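To make the setup concrete, here is a stripped-down version of the receiving node; all of the names (ReceivingNode, scheduleReply, and so on) are placeholders I made up for this question, not a real simulator API:

// Stripped-down receiving node. processMessage() itself consumes no
// simulated time, so I need a processingDelay to schedule the reply
// realistically. All names here are invented for illustration.
class ReceivingNode {
    void handleMessage(double now) {
        processMessage();
        double processingDelay = 0.0;   // <-- this is the value I want to compute
        scheduleReply(now + processingDelay);
    }

    void processMessage() { /* the O(n) algorithm runs here */ }

    void scheduleReply(double time) { /* hands the reply event to the simulator */ }
}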

Andrei
  • As you said, you must know the `number of operations per second` the `cpu` can perform. Once you know that, you analyze your algorithm, line by line, to find out how many atomic operations are performed. Then take the ratio between the two numbers, and you will get an approximate number of seconds that the execution will take (a toy version is sketched after these comments). – NiVeR May 04 '17 at 10:41
  • Actually you can't calculate the running time if you just have big-O. It is only an upper bound, so the algorithm may perform much better and it is only valid for big enough n. For smaller n everything is possible. – Henry May 04 '17 at 11:41
  • This is like saying, "How can we know what kind of material the house is made of from the color of the paint?" Just as the whole purpose of paint is to cover the underlying material, the whole purpose of big-O is to remove from consideration all constant factors in the run time of programs. Your question is, consequently, total nonsense. – Gene May 06 '17 at 02:08
  • About the best you can do is this. If you know processing N1 events in your Omega(N) sim takes T1 seconds and N2 takes T2, then you can be fairly sure the time for n events is very roughly T(n) = T1 + (n - N1)(T2 - T1) / (N2 - N1). This is simple linear interpolation, which makes sense because the algorithm is linear time (a worked sketch also follows below). – Gene May 06 '17 at 02:18
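A toy version of the counting approach NiVeR describes, where the per-element operation count and the CPU's operations-per-second figure are both made-up numbers:

// Estimate running time as (atomic operations) / (operations per second),
// per the comment above. Both figures below are assumed for illustration.
class OpsRatioEstimate {
    public static void main(String[] args) {
        long n = 1_000_000;                 // input size
        long opsPerElement = 12;            // counted by hand from the algorithm
        long totalOps = n * opsPerElement;  // O(n) algorithm -> roughly 12n operations

        double opsPerSecond = 2e9;          // what this CPU sustains on such code
        double seconds = totalOps / opsPerSecond;
        System.out.printf("estimated time: %.4f s%n", seconds);
    }
}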
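And a worked sketch of Gene's linear interpolation, using invented calibration measurements:

// Linear interpolation of running time from two calibration runs, as
// suggested in the comment above. N1/T1 and N2/T2 are assumed measurements.
class InterpolateRuntime {
    public static void main(String[] args) {
        double n1 = 1_000, t1 = 0.8;    // measured: 1,000 events took 0.8 s
        double n2 = 10_000, t2 = 7.9;   // measured: 10,000 events took 7.9 s

        double n = 5_000;               // event count we want to predict
        double t = t1 + (n - n1) * (t2 - t1) / (n2 - n1);
        System.out.printf("predicted time for %.0f events: %.2f s%n", n, t);
    }
}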

1 Answer


Without more information, the best you can do for algorithmic complexity O(f(N)) is to evaluate C·f(N), where C is a machine-dependent constant. To get a realistic C, you can estimate (or benchmark) the running time on a real machine for some N (not too small, not too large).
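For example, for the O(n) algorithm in the question (so f(N) = N), the calibration could look like this; the benchmark figure is an assumed measurement, not real data:

import java.util.function.DoubleUnaryOperator;

// Calibrate the machine-dependent constant C from one benchmark run,
// then predict T(n) = C * f(n). The measured time below is assumed.
class CalibratedEstimate {
    public static void main(String[] args) {
        DoubleUnaryOperator f = n -> n;     // O(n) algorithm, so f(N) = N

        // One calibration run on a real machine (assumed measurement):
        double nCal = 100_000, tCal = 0.05; // 100,000 messages took 0.05 s
        double c = tCal / f.applyAsDouble(nCal);   // machine-dependent constant

        // Predict the running time for another input size:
        double n = 1_500_000;
        System.out.printf("C = %.3e, predicted T(%.0f) = %.3f s%n",
                          c, n, c * f.applyAsDouble(n));
    }
}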

For a given architecture, C will be roughly proportional to the clock speed. Across architectures and generations, C will fluctuate by a small factor.

You can also use the MIPS or FLOPS specifications of the machine, but always in a relative way: get the true running time on a machine with known ratings, and extrapolate proportionally for another.
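For instance (all ratings and times below are invented for illustration):

// Relative extrapolation: measure on machine A, scale by the MIPS ratio
// to estimate machine B. All numbers here are assumed, not real specs.
class MipsExtrapolation {
    public static void main(String[] args) {
        double timeOnA = 2.0;       // measured running time on machine A (s)
        double mipsA = 50_000;      // published MIPS rating of machine A
        double mipsB = 120_000;     // published MIPS rating of machine B

        double timeOnB = timeOnA * mipsA / mipsB;  // faster machine, shorter time
        System.out.printf("estimated time on B: %.2f s%n", timeOnB);
    }
}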

But all this is very crude. Keep in mind that the variance of the running time for fixed N can be huge, and that f(N) is usually a bad approximation/upper bound.
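If you do use such an estimate, it simply becomes the delay you attach to the reply event in your simulator. A minimal sketch with a toy event queue; none of these class names come from a real framework:

import java.util.PriorityQueue;

// Toy discrete event queue showing where the C*f(N) estimate goes: it is
// the delay between a message arriving and its reply being sent. All
// names and numbers here are invented for illustration.
class ToyDes {
    static class Event {
        final double time; final String what;
        Event(double time, String what) { this.time = time; this.what = what; }
    }

    public static void main(String[] args) {
        PriorityQueue<Event> queue =
            new PriorityQueue<>((a, b) -> Double.compare(a.time, b.time));

        double c = 5e-7;                     // constant calibrated on a real machine
        long n = 200_000;                    // size of the message being processed
        double processingDelay = c * n;      // T(n) = C * f(n) with f(n) = n

        queue.add(new Event(0.0, "message arrives at node"));
        queue.add(new Event(processingDelay, "reply sent to central node"));

        while (!queue.isEmpty()) {
            Event e = queue.poll();
            System.out.printf("t=%.4f s: %s%n", e.time, e.what);
        }
    }
}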

  • Is there a better way to do this? What I need to do is to evaluate the performance of some algorithm. Basically, a node gets a message, does something with the message, and then sends a reply. Now, in my discrete event simulator I can get the time it takes to send something between nodes, but it doesn't show the processing time. Do I have the wrong approach? – Andrei May 04 '17 at 18:05
  • @Andrei: I have no idea what you mean. –  May 04 '17 at 19:05