
I want to measure bandwidth using C#. Here's what I did. Comments and suggestions are welcome.

  1. Find the maximum UDP payload (on my test bed, it's 1472 bytes)
  2. Create non-compressible data of 1472 bytes
  3. Send this data from a server to a client multiple times (in my test, 5000 packets)
  4. The client starts a stopwatch when the first packet arrives
  5. When all the data has been sent, the server sends a notification to the client stating that all the data has been sent
  6. The client stops the stopwatch
  7. I calculate bandwidth as (total packets sent (5000) * MTU (1500 bytes)) / elapsed time (see the sketch after this list)
  8. I notice that some packets are lost: at best, 20% loss; at worst, 40% loss. I did not account for this when calculating the bandwidth. I suspect the client's network device experiences buffer overruns. Do I need to take this factor into account?
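A minimal client-side sketch of steps 4–8, assuming the server signals "all data sent" with a zero-length datagram and that the port number is just a placeholder (both are my assumptions, not part of the setup above):

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

class UdpBandwidthClient
{
    static void Main()
    {
        const int port = 9000;        // hypothetical test port
        const int mtu = 1500;         // frame size used in the step-7 calculation
        const int packetsSent = 5000; // packets the server sends (step 3)

        using (var client = new UdpClient(port))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            var stopwatch = new Stopwatch();
            int received = 0;

            while (true)
            {
                byte[] data = client.Receive(ref remote);

                if (!stopwatch.IsRunning)
                    stopwatch.Start();      // step 4: start on the first packet

                if (data.Length == 0)       // step 5: assumed "all sent" marker
                    break;

                received++;                 // count what actually arrived
            }

            stopwatch.Stop();               // step 6

            double seconds = stopwatch.Elapsed.TotalSeconds;

            // Step 7: rate based on what the server sent.
            double sentMbps = packetsSent * mtu * 8 / seconds / 1000000.0;

            // For comparison (step 8): rate based on what actually arrived.
            double receivedMbps = received * mtu * 8 / seconds / 1000000.0;

            Console.WriteLine("Sent rate: {0:F2} Mbps, received rate: {1:F2} Mbps", sentMbps, receivedMbps);
            Console.WriteLine("Packet loss: {0:F1} %", 100.0 * (packetsSent - received) / packetsSent);
        }
    }
}
```

Reporting both rates side by side makes the packet-loss question from step 8 visible: the gap between them is exactly the data that was dropped somewhere along the path.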

If you have any suggestions or comments, feel free to share them.

Thanks.

Syaiful Nizam Yahya
  • If you want to measure how much data you can transfer from computer A to computer B, I would not use UDP, because it's unreliable: you cannot be sure whether your packet has arrived or not. So when you send 5000 packets, there is a chance that the receiver did not get all of them (as you have already noticed). I would use TCP or some other reliable protocol where you can really measure the throughput; otherwise it's just the throughput that computer A can send. Alternatively, collect all data at computer B and, after all packets are sent, have computer B send the amount of data received (the length) back. – Stephan Schinkel Nov 12 '10 at 17:28
  • I'm not sure what the correct term is, bandwidth or throughput, but my intention is to measure the channel quality (the amount of data the channel can transfer, including packet headers). Packet loss is expected, as it relates to the quality of the channel. I hope you understand. – Syaiful Nizam Yahya Nov 12 '10 at 17:32
  • 1
    Related, but probably not dupe: http://stackoverflow.com/questions/566139/detecting-network-connection-speed-and-bandwidth-usage-in-c – GWLlosa Nov 12 '10 at 17:56
  • The problem with TCP is that it is almost impossible to measure packet loss. Plus, retransmission will skew the bandwidth computation. For example, when you think you get 1 Mbps, the data actually transferred is more once you count the TCP packet headers and retransmitted data; in reality, it could be a 2 Mbps channel. Who knows. – Syaiful Nizam Yahya Nov 12 '10 at 18:03
  • Both your network bandwidth test and your real network usage will require retransmission eventually, so you are measuring something close to a real usage scenario there. Also, comparing the measurement error of TCP vs. UDP, I would choose TCP. – Gerardo Grignoli Nov 12 '10 at 18:24
  • Another related link: http://stackoverflow.com/questions/2909268/getting-on-the-wire-size-of-messages-in-wcf – Brent Arias Nov 12 '10 at 18:28

1 Answer


To calculate bandwidth, I would use TCP instead of UDP. When you use UDP, all the datagrams may leave your network card very fast (at 100 Mbps) and get queued at the "slowest link" of the chain (e.g. a 512 kbps cable modem/router). If the queue buffer fills up, it's likely that datagrams will be discarded, so your test is not very reliable.

I would use TCP and do some math to convert the TCP payload speed (KB/s) into throughput (Mbps). I think TCP overhead is around 8%.
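A minimal sketch of that conversion; the class and method names are mine, and the 8% overhead is only the rough estimate above (the real figure varies with MSS and TCP options):

```csharp
static class ThroughputMath
{
    // Convert measured TCP payload throughput (KB/s) into an estimated
    // channel rate (Mbps), using the ~8% overhead figure mentioned above.
    public static double EstimateChannelMbps(double payloadKBytesPerSec)
    {
        const double overhead = 0.08;                                    // assumed TCP/IP overhead
        double payloadMbps = payloadKBytesPerSec * 1024 * 8 / 1000000.0; // KB/s -> Mbps
        return payloadMbps * (1 + overhead);                             // add the headers back in
    }
}
```

For example, 1000 KB/s of TCP payload works out to roughly 8.8 Mbps on the wire under that assumption.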

Gerardo Grignoli