
I am going to benchmark several implementations of numerical simulation software on a high-performance computer, mainly with regard to run time, though other resources such as memory usage and inter-process communication could be interesting as well.

As of now, I have no knowledge of general guidelines for benchmarking software in this area. Neither do I know how much measurement noise can reasonably be expected, nor how many test runs one usually carries out. Although these issues are of course system-dependent, I am fairly sure there are some standards considered reasonable.

Can you provide me with such (introductory) information?

shuhalo
  • Are you benchmarking your own software (which you can modify) or ...? – Rook Sep 03 '10 at 23:56
  • It is self-written software. As of now, I want to figure out whether a change of memory alignment has any positive effect on computing time (keyword: caching). Next up is a scalability test for an extension of the program. – shuhalo Sep 04 '10 at 01:34

2 Answers

If a test doesn't take much time, then I repeat it (e.g. 10,000 times) to make it take several seconds.

I then do that multiple times (e.g. 5 times) to see whether the test results are reproducible (or whether they're highly variable).

There are limits to this approach (e.g. it's testing with a 'warm' cache), but it's better than nothing, and it is especially good for comparing similar code, e.g. for seeing whether or not a performance tweak to some existing code did in fact improve performance (i.e. for doing 'before' and 'after' testing).
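
A minimal sketch of that pattern in C++ (not from the answer itself: kernel() is a hypothetical stand-in for whatever you actually want to measure, and the counts are just the example values above):

    #include <chrono>
    #include <cstdio>

    // Hypothetical stand-in for the operation under test;
    // replace this with the code you actually want to measure.
    static double kernel()
    {
        double sum = 0.0;
        for (int i = 0; i < 1000; ++i)
            sum += i * 1.000001;
        return sum;
    }

    int main()
    {
        const int repetitions = 10000; // make one trial last several seconds
        const int trials      = 5;     // repeat to check reproducibility

        for (int t = 0; t < trials; ++t) {
            double acc = 0.0; // accumulate results so the calls are not optimized away
            const auto start = std::chrono::steady_clock::now();
            for (int r = 0; r < repetitions; ++r)
                acc += kernel();
            const auto stop = std::chrono::steady_clock::now();

            const std::chrono::duration<double> elapsed = stop - start;
            std::printf("trial %d: %.6f s total, %.3f us/call (acc=%g)\n",
                        t, elapsed.count(),
                        1e6 * elapsed.count() / repetitions, acc);
        }
        return 0;
    }

If the per-call times of the five trials agree to within a few percent, the measurement is probably trustworthy; a large spread suggests interference from other processes or from the machine itself.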

ChrisW

The best way is to benchmark with the job you will actually be using it for!

Can you run a sub-sample of the actual problem, one that will only take a few minutes, and simply time that on various machines?
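
A minimal sketch of that idea, again in C++ (run_simulation() is a hypothetical placeholder for the real solver, and the problem size and its default value are made up for illustration):

    #include <chrono>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical placeholder for the real solver; in practice this would
    // run the actual simulation on a problem of the given size.
    static void run_simulation(long grid_points)
    {
        for (long i = 0; i < grid_points; ++i) { /* numerical work goes here */ }
    }

    int main(int argc, char** argv)
    {
        // Pick a size small enough that one run takes only a few minutes,
        // and pass the same value on every machine you want to compare.
        const long grid_points = (argc > 1) ? std::atol(argv[1]) : 1000000L;

        const auto start = std::chrono::steady_clock::now();
        run_simulation(grid_points);
        const auto stop = std::chrono::steady_clock::now();

        const std::chrono::duration<double> elapsed = stop - start;
        std::printf("problem size %ld: %.3f s wall-clock\n",
                    grid_points, elapsed.count());
        return 0;
    }

Build the same binary and run it with identical input on each machine; the reported wall-clock times are then directly comparable.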

Martin Beckett