
Please see the output JSON file produced by the iperf3 client below. Why do we get 10002834326.213295 bits_per_second in the first interval when running iperf3 over a 10GbE network?

Also, the convention is to convert bits_per_second into Gbps by dividing by 1000 rather than 1024, right?
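For reference, a minimal sketch of how the per-interval throughput can be read from the JSON file and converted (assuming the standard iperf3 --json layout, where each entry in "intervals" carries a "sum" object with "start" and "bits_per_second" fields, and the file name matches the --logfile option used here):

import json

# Load the iperf3 JSON log written by --logfile output.json
with open("output.json") as f:
    report = json.load(f)

# Each interval's "sum" carries the aggregate bits_per_second for that interval.
for interval in report["intervals"]:
    bps = interval["sum"]["bits_per_second"]
    # Network rates are decimal: 1 Gbps = 10**9 bits per second (not 1024**3).
    print(f"{interval['sum']['start']:6.1f}s  {bps / 1e9:.3f} Gbps")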

Please find below the details.

  • command for iperf3 server:
iperf3 --server
  • command for iperf3 client:
iperf3 --client 192.168.1.22 \
       --window 32M \
       --json \
       --logfile output.json \
       --forceflush
  • kernel configuration:
/proc/sys/net/core/rmem_default
124928

/proc/sys/net/core/rmem_max
16777216

/proc/sys/net/core/wmem_default
124928

/proc/sys/net/core/wmem_max
16777216

/proc/sys/net/ipv4/tcp_rmem
4096    87380   16777216

/proc/sys/net/ipv4/tcp_wmem
4096    87380   16777216

/proc/sys/net/ipv4/tcp_window_scaling
1

/proc/sys/net/ipv4/tcp_mtu_probing
0

/proc/sys/net/ipv4/tcp_available_congestion_control
cubic reno

/proc/sys/net/ipv4/tcp_congestion_control
cubic
  • This sounds like it might be due to TCP slow start. In slow start the window size doubles every RTT, which can temporarily push the measured rate above the line rate. It could also be caused by large tcp_wmem values, because that skews how iperf3 measures the rate on the sender side (see the back-of-the-envelope check below). – Tomer Aug 04 '20 at 17:38
  • Sounds sensible to me, thanks @Tomer – Sebastian Luna-Valero Aug 05 '20 at 10:30
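A quick back-of-the-envelope check of the sender-side measurement point, as a sketch under the assumption that iperf3 counts bytes as soon as they are written to the socket send buffer, so data still queued in the buffer at the end of the first interval gets attributed to that interval:

# Illustrative only: how a modest amount of data queued in the send buffer
# can push the first-interval rate slightly past the 10 Gbps line rate.
line_rate_bps = 10 * 10**9          # nominal 10GbE line rate
measured_bps  = 10002834326.213295  # first-interval value from output.json

excess_bits  = measured_bps - line_rate_bps
excess_bytes = excess_bits / 8
print(f"excess over line rate: {excess_bytes / 1024:.0f} KiB")  # roughly 346 KiB

# A few hundred KiB written into the socket buffer but not yet on the wire
# is small compared to the 32M window requested with --window, so a first
# interval reporting slightly above 10 Gbps is plausible.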

0 Answers