I need to model the latency of an LTE simulator under several system configurations (single core, multi-core, multiple nodes on the same server, multiple servers). Does anyone have an idea how to estimate the computational cost of a piece of source code (or of just a part of the whole code, if I want to)? I think the possible approaches are:
- Take the difference between timestamps at the start and end of the execution, e.g. using clock() (a minimal MATLAB sketch is at the end of the post)
- Total number of operations / instructions per second (machine-dependent)
- Total number of instructions / instructions per second
The 3rd is the more general version of the 2nd; in both cases the latency estimate is just a count divided by a machine-dependent throughput, as in the sketch below.
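For approaches 2/3, something like this is what I have in mind; the instruction count and throughput below are made-up placeholder numbers, not measurements:

```matlab
% Rough latency estimate from instruction counts (approaches 2/3).
% Both numbers are hypothetical placeholders, not measurements.
numInstructions = 2e9;   % total instructions executed by the code section
instrPerSec     = 4e9;   % machine-dependent throughput (clock rate * IPC)

latencySec = numInstructions / instrPerSec;   % ~0.5 s with these numbers
fprintf('Estimated latency: %.3f s\n', latencySec);
```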
The simulator is in MATLAB, and I am free to use C (through MEX files).
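For the first approach, here is a minimal MATLAB sketch of what I mean; `runScheduler` is just a placeholder for whichever part of the simulator I want to measure, and tic/toc or timeit seem more suitable than clock for elapsed time:

```matlab
% Approach 1: wall-clock timing of a code section.
% runScheduler is a placeholder for the code under test.

% Option A: tic/toc around the section of interest
tStart = tic;
runScheduler();                 % placeholder call
elapsedSec = toc(tStart);
fprintf('Elapsed time: %.6f s\n', elapsedSec);

% Option B: timeit runs the function several times and returns
% a more stable (median) estimate
medianSec = timeit(@() runScheduler());
fprintf('timeit estimate: %.6f s\n', medianSec);
```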