
I'm transferring large files between two Linux servers. They are on the same network, with 1 Gbit NICs connected to a 1 Gbit switch over Cat 6 Ethernet cables. I started by transferring files to disk #1 and the average speed was 37 MB/s, which is fine. I then started moving files to the other disks, and they all give me speeds between 10 MB/s and 40 MB/s, averaging around 15 MB/s.

What could cause these slower speeds? I have tried three different disks.

Shlomi

1 Answer


It is not clear from your description where the bottleneck is. To rule out the network, build a RAM disk on both client and server and rsync between them. Also make sure you are not tunneling through ssh and that rsync's compression option is not enabled; otherwise CPU load on the client or server can limit transfer speed. This test isolates the Ethernet path: if the network is healthy, the bandwidth should stay constant rather than dropping or spiking. After that, benchmark the disks on both client and server; they are most likely what is affecting your transfer speed.
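A minimal sketch of that RAM-disk test, assuming tmpfs is available; the mount point, file size, receiver IP, and rsyncd module name are all placeholders you should adapt:

    # On both hosts: mount a RAM-backed filesystem (size is an assumption;
    # keep it well under the machine's available RAM).
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

    # On the sender: create a test file that lives entirely in RAM,
    # so no disk is touched on the read side.
    dd if=/dev/zero of=/mnt/ramdisk/testfile bs=1M count=1024

    # On the receiver: serve the RAM disk via the rsync daemon so no ssh
    # tunnel is involved. Hypothetical minimal /etc/rsyncd.conf module:
    #   [ramtest]
    #       path = /mnt/ramdisk
    #       read only = no
    #       uid = root
    sudo rsync --daemon

    # On the sender: push to the daemon; note there is no -z flag,
    # so nothing is compressed.
    rsync -av --progress /mnt/ramdisk/testfile rsync://RECEIVER_IP/ramtest/

On a healthy gigabit link this should hold steady somewhere near 110 MB/s; if it does, the network is not your problem.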

You did not provide enough information to go any further, so I just want to point you in the right direction: first exclude the network, then check the client's I/O read speed and the server's I/O write speed on all destination disks. I bet the bottleneck is the destination disks, especially if you run software RAID or if they are standalone desktop SATA HDDs.
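As a starting point for those disk checks, a sketch using dd with direct I/O; the mount points are assumptions, so adjust them to your layout:

    # Write test on each destination disk; oflag=direct bypasses the
    # page cache so the number reflects the disk, not RAM.
    dd if=/dev/zero of=/mnt/disk1/ddtest bs=1M count=4096 oflag=direct status=progress
    rm /mnt/disk1/ddtest

    # Read test on the source: drop caches first so reads actually
    # come from the disk instead of memory.
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    dd if=/mnt/source/somefile of=/dev/null bs=1M status=progress

Run the write test on every destination disk separately; if one of them reports 15 MB/s on its own, you have found your bottleneck.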

Marco
  • Hi, I'm using 'rsync -avh', which doesn't compress as far as I know. I'm transferring files from server #1 with RAID 10 (20 disks, WD Blue 2 TB) to server #2 with 5 standalone disks, 2 TB, different models. I will run a write speed test on the destination disks. – Shlomi Mar 20 '17 at 06:48
  • You didn't mention network tests. I can't assume the network is fine, since you didn't provide the switch brand/model; I've seen cheap unmanaged gigabit switches perform even worse than this (see the iperf3 sketch below). – Marco Mar 20 '17 at 07:04
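For the raw network test mentioned in that comment, a minimal iperf3 sketch; the server IP and test duration are assumptions:

    # On the receiving server, start iperf3 in server mode:
    iperf3 -s

    # On the sending server, run a 30-second throughput test:
    iperf3 -c 192.168.1.20 -t 30
    # A healthy gigabit link reports roughly 940 Mbit/s; anything far
    # below that points at the switch, cabling, or NIC, not the disks.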