
We are running a 10 Gbit server. Testing with iperf works perfectly: `[ 3] 0.0-10.0 sec 10.6 GBytes 9.15 Gbits/sec`

But when using rsync over ssh on a different port (`rsync -Pe 'ssh -p xxx'`), the throughput is poor: `8,589,918,208 100% 129.04MB/s 0:01:03 (xfr#1, to-chk=0/1)`

What could cause this limitation?

Thanks

Barmi
  • What disks are you reading from and writing to? How many spindles? How are they connected? How much cache is there? Is write-caching enabled? – marctxk Apr 27 '17 at 11:43
  • Have you tried other protocols (HTTP, FTP, NFS)? Does ethtool report errors? – bgtvfr Apr 27 '17 at 12:04
  • 1
    Could be the SSH protocol, or better the cipher used. You could try `rsync -e "ssh -c arcfour" ...`. There is some more information [here](https://bbs.archlinux.org/viewtopic.php?id=136713). But that also affects your security. – Thomas Apr 27 '17 at 12:20
  • 1
    To rule out disk slowness on either side, copy from /dev/zero and to /dev/null – jamesbtate Apr 27 '17 at 12:56
  • rsync on the local disk works pretty fast, so it's neither the disks nor the cipher (with arcfour it's a bit faster, but still far from the iperf rates). But HTTP is also slow... Any ideas what else can be tested, or how? – Barmi Apr 27 '17 at 13:06
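
To follow jamesbtate's suggestion and take the disks out of the equation, a sketch along these lines could be used; it pushes zeros through the same ssh path rsync uses (user, host and port are placeholders):

```
# Stream zeros through ssh alone, bypassing disks on both ends;
# dd prints the achieved throughput when it finishes.
dd if=/dev/zero bs=1M count=10000 | ssh -p xxx user@host 'cat > /dev/null'
```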

1 Answer


(This should be a comment, but it's a bit long.)

It is very unlikely to be caused by the cipher suite (if that were the bottleneck you'd see one of your CPUs saturated). Nor is it likely to be your disk I/O; that is easy to determine experimentally with tools like fio or Bonnie++ (don't use `dd`, as it only measures streaming writes).
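
For example, a minimal fio sequential-read run along these lines (path and size are illustrative, adjust to your setup) keeps the page cache out of the measurement:

```
# 1 MiB sequential reads with direct I/O so the page cache doesn't flatter the result
fio --name=seqread --rw=read --bs=1M --size=8G --direct=1 \
    --ioengine=libaio --directory=/path/to/data
```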

Rsync is a rather chatty protocol, so throughput will be limited by your RTT. While it's easy to increase bandwidth, it's difficult to decrease latency. You could confirm this with tc: artificially increase the latency and measure the impact.
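
A minimal sketch of that experiment, assuming eth0 is the interface in use (adjust to yours):

```
# Add 10 ms of artificial latency to outgoing traffic, rerun the rsync test...
tc qdisc add dev eth0 root netem delay 10ms
# ...then remove the delay again afterwards
tc qdisc del dev eth0 root
```

If throughput drops roughly in proportion to the added delay, the transfer is window-limited rather than bandwidth-limited.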

symcbean
  • Any idea why iperf and rsync differ so much? Even if rsync is chatty, the difference should not be as big as it is... – Barmi Apr 27 '17 at 14:10
  • I thought I had said why I suspect it's slower: iperf just streams a load of data, rsync plays ping-pong. – symcbean Apr 27 '17 at 14:29
  • Yes... I read your answer. But ping-pong should not cost that much bandwidth. With rsync I transferred one big file, so the overhead should be minimal! – Barmi Apr 27 '17 at 17:45
  • You are not understanding the fundamental difference between bandwidth and throughput. At the TCP layer, the ssh layer and the rsync layer, the data is split into small chunks, and there is a limit to the number of chunks that will be sent between acknowledgements from the receiver. At the rsync layer that's a 1:1 relationship. If you don't believe me, point Wireshark at the traffic and you'll see for yourself. – symcbean Apr 27 '17 at 22:30
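
To put rough numbers on symcbean's point (the RTT here is an illustrative assumption, not a measurement): with at most W bytes allowed in flight per acknowledgement, throughput is capped at W / RTT. OpenSSH's default channel window is about 2 MB, so an RTT of around 15 ms would cap a single ssh stream at roughly 2 MB / 0.015 s ≈ 133 MB/s, in the same ballpark as the 129 MB/s observed, while iperf's plain TCP stream benefits from kernel window autotuning and can fill the 10 Gbit pipe.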