
I used PerfMon on Windows XP to check the network load of an application I have written.

In the example below you see five columns:

Date Time, Bandwidth, [x] Bytes per second sent, [x] Bytes per second received, [x] Total Bytes per second

[x] == The network interface that I checked the load against

Here's the data.

02/18/2014 15:30:50.894,"1000000000","922.92007218169454","826.92838536756381","1749.8484575492582"
02/18/2014 15:30:51.894,"1000000000","994.06970480770792","774.05427718427154","1768.1239819919795"
02/18/2014 15:30:52.894,"1000000000","1446.0226222234514","1319.0206353476713","2765.0432575711229"
02/18/2014 15:30:53.894,"1000000000","2652.0592714274339","1207.0269760983833","3859.0862475258173"

Date, time, and bandwidth (10^9 bit = 1 Gbit, the LAN connection) are obviously correct.

The other three columns are hard to interpret. The unit for each is supposedly bytes per second, but how can the system resolve 14 (or 13) digits after the decimal point if these were really bytes?

What is 0.0000000000000001 byte?

Indeed, the values are plausible up to the decimal point.

1 Answer

The timer's resolution is higher than what is shown. You might send 923,076 bytes in 100,003 microseconds: the trace shows 100 milliseconds in the time column and ignores the extra microseconds, but it computes 923076 / 0.100003 seconds for the bytes-per-second column, which gives a fractional result. Note: I made up the numbers; it doesn't make much sense to hunt for a pair that gives your 922.9200... exactly.
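
To make the mechanism concrete, here is a minimal Python sketch with made-up numbers (just like the answer's); it is not PerfMon's actual code. A whole-number byte count divided by a high-resolution elapsed time naturally yields a fractional bytes-per-second value with a long decimal tail:

    # Made-up byte count and interval length; not taken from PerfMon itself.
    bytes_sent = 923            # whole bytes counted during the sample interval
    elapsed_seconds = 1.000087  # interval as measured by a high-resolution timer

    # The quotient is a double, so it carries a long fractional part even
    # though no fractional bytes were ever sent.
    rate = bytes_sent / elapsed_seconds
    print(rate)                 # roughly 922.92, printed with many decimal digits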

  • Thanks! But I still have a question. It sounds reasonable that it has to do with the timer. The values are always 17 digits plus one dot, so 18 symbols in the representation. That would mean I could resolve down to femtoseconds in the example above, which is probably not possible. So the intention is presumably to give a proper result at, let's say, 100% NIC utilization; with our 18 symbols we would still be able to SHOW/PRESENT a resolution of up to 0.1 microseconds... – dkeck Feb 18 '14 at 19:17
  • ...But why isn’t then the value simply filled with zeros in the “un-resolvable zone” or simply omitted. What measured the system? 922,920,072,181,694,540 * 10^-15 bytes / (10^15 femtoseconds) gives us the exact number. But the system can’t resolve in this time span so where does it have these (181,694,540) values from? – dkeck Feb 18 '14 at 19:17
  • You'd probably have to ask Bill Gates to be certain, but I'd guess the performance collection stuff handles everything in the same way, from network bytes sent, through disk blocks read, to processor instructions executed. The part of the code that formats and outputs the string probably has no idea at all what the numbers mean; it just treats everything the same. – Guntram Blohm Feb 18 '14 at 19:20
  • Another snippet for my 'Ask the programmer' box. I will leave it at that, as always ;). Thanks. – dkeck Feb 18 '14 at 19:30
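
A small sketch of the point made in the comments above, under the assumption that the CSV writer simply prints each counter as a full-precision IEEE 754 double (the code below demonstrates ordinary double behaviour, not PerfMon's actual implementation): 17 significant digits are exactly what is needed to round-trip a double, which would explain why every value shows about 17 digits regardless of the timer's real resolution.

    # Take the first "bytes per second sent" sample from the question.
    value = 922.92007218169454

    # 17 significant digits are enough to reproduce any double exactly,
    # so formatting at full precision always yields ~17 digits, whatever
    # physical quantity the number happens to represent.
    text = "%.17g" % value
    print(text)                   # the sample printed back with 17 significant digits
    print(float(text) == value)   # True: the round trip is exact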