
Forgive me if this is the wrong place - this is my first post.

I've set up a network in Mininet: two nodes, each configured with the same bandwidth and delay (bandwidth: 10 Mbps, delay: 10 ms); a simplified sketch of the topology is included below the steps. When using iperf to test this, I perform the following steps:

  1. Start an iperf server on Node 1 (10.0.0.2): iperf -s
  2. Start the iperf client on Node 2 (10.0.0.3): iperf -c 10.0.0.2
  3. The test completes
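
For reference, the topology is created roughly like this (a simplified sketch of my setup, using Mininet's Python API with TCLink; the real script may differ in details):

    from mininet.net import Mininet
    from mininet.link import TCLink
    from mininet.cli import CLI

    net = Mininet(link=TCLink)                 # TCLink enables bw/delay shaping
    h1 = net.addHost('h1', ip='10.0.0.2')
    h2 = net.addHost('h2', ip='10.0.0.3')
    net.addLink(h1, h2, bw=10, delay='10ms')   # 10 Mbps, 10 ms delay
    net.start()
    CLI(net)                                   # iperf is then run from the Mininet CLI
    net.stop()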

Node 2 (the client) shows a bandwidth of 11.2Mbps, and a test time of 10.4 seconds. Node 1 (the server) shows a bandwidth of 9.56Mbps and a test time of 12.2 seconds. Both the client and the server show the same transfer size (13.9 MBytes).

Is this time difference due to the delay on each individual packet? The TCP window size is 85.6 Kbyte, so adding a 10ms delay to each packet being sent on the network roughly allows for the difference. However, I would have thought the delay would be 20ms as there is a 10ms network delay on both the sending and receiving side - why is this not the case?
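
To sanity-check my own numbers (assuming iperf's 13.9 MBytes means 13.9 x 1024 x 1024 bytes, and a round trip of roughly 20 ms), the same transfer divided by each side's elapsed time gives exactly the two rates reported, and the 85.6 KByte window is larger than the bandwidth-delay product, so I wouldn't expect the window itself to be the limit:

    transfer_bits = 13.9 * 1024 * 1024 * 8     # 13.9 MBytes reported on both sides
    print(transfer_bits / 10.4 / 1e6)          # ~11.2 Mbps over the client's 10.4 s
    print(transfer_bits / 12.2 / 1e6)          # ~9.56 Mbps over the server's 12.2 s

    bdp_bytes = 10e6 * 0.020 / 8               # 10 Mbps x 20 ms RTT = 25,000 bytes
    window_bytes = 85.6 * 1024                 # ~87,654 bytes, comfortably above the BDP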

I'm hoping this makes sense.

Ben


3 Answers


The test start and end are triggered by a timer. The actual start and end times also include some signal notification and handling overhead, so the elapsed time measured on each side differs.

Howard Shane

Unfortunately, Mininet has a number of unresolved bugs. One of them can result in a measured throughput higher than the available bandwidth. Furthermore, combining a bandwidth limit with a delay appears to be another issue that should be considered.

I suggest you repeat your experiments with a 0ms delay.


According to my tests, it's just the 1000-versus-1024 conversion that they use. Suppose I have this case:

h1 iperf -s -u -p 2000 -i 1
h3 iperf -u -c 10.0.0.1 -p 2000 -b 20M -i 1

The server and client both use 1000 for conversion by default. Capital and small letters [kmgKMG] (with the -b flag) are used to signal which factor to use for conversion: in this case, 'M' uses 1024 and 'm' uses 1000, and small letters (1000) are the default. So if we write -b 20M on the client, then 20 x 1024 x 1024 bits of data are sent. The server receives 20 x 1024 x 1024 bits and uses 1000 for conversion, i.e. (20 x 1024 x 1024 bits) / 1000 / 1000 = 20.97 ≈ 21 Mbits. This is the value that both the client and the server report: we sent 20M (by the parameter) but see ≈21M, because the value is converted with one factor before sending and converted back with a different factor when displayed. To show 20M on the server as well, use the -f M flag to format the output using 1024. In conclusion, be consistent with capital and small letters in the -b and -f flags. If you want to avoid the -f flag on the server, use -b 20m on the client instead:

h3 iperf -u -c 10.0.0.1 -p 2000 -b 20m -i 1
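
As a quick check of that arithmetic (my own illustration of the 1000-versus-1024 conventions described above):

    sent_bits = 20 * 1024 * 1024        # "-b 20M": the client interprets 'M' as 1024 * 1024
    print(sent_bits / 1000 / 1000)      # default formatting divides by 1000 * 1000 -> 20.97 (~21 Mbits)
    print(sent_bits / 1024 / 1024)      # "-f M" formats with 1024 * 1024 -> 20.0
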
Deo