
I updated my file server VM (VMware) from CentOS 7 to RHEL 8. I doubled the CPUs and memory on the new system, but for some reason rsync transfers bog the system down considerably, to the point where users have trouble connecting over ssh. Total CPU usage is low (never above 1%), iotop shows nothing from the user doing the rsync, and iftop shows only modest traffic, generally around 12 Mb/s TX and 4 Mb/s RX on a 1 Gb connection.

In fact, all network activity bogs the system down.

I can't seem to find the bottleneck, but it's driving me nuts.

rsync -zuva -e "ssh -i ~/.ssh/remotekey" user@remote:/home/data /nfs/local/data/

This command used to take about an hour with CentOS 7. It now takes nearly 5 hours for the same amount of data and brings the system to a standstill.

What might have changed in RHEL 8 or rsync or VMware networking to cause this? I'm at a bit of a loss. Is anyone else seeing this issue?

  • Does any network load cause the same bad behavior? For example, how does the server behave during an HTTP download or an `iperf` test? – shodanshok Oct 21 '21 at 05:08
  • No, network load doesn't cause this behavior, but you got me headed down a path to check my NFS mounts. Sure enough, they're really slow (~3 MB/s). Since most users would be rsync'ing to their NFS-mounted directories, this is probably an NFS issue. A quick search for slow NFS performance turned up https://access.redhat.com/solutions/5953561, and that is an issue: my read_ahead_kb was set really low (a sketch of how to check it is just below). I increased it, but it still didn't seem to solve the NFS issue. I'm using NFSv3 because most of my clients are on 3, though I haven't seen anything suggesting that is the problem. – Astro.Bacon Oct 21 '21 at 19:39
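
For reference, here is a rough sketch of how the per-mount NFS readahead can be inspected on RHEL 8. The mount point /nfs/local/data, the 0:52 BDI id, and the test file name are placeholders; substitute whatever your system actually reports.

# Find the BDI (backing device) id the kernel assigned to the NFS mount; prints something like 0:52
mountpoint -d /nfs/local/data

# Check the current readahead for that mount (value is in KiB)
cat /sys/class/bdi/0:52/read_ahead_kb

# Rough sequential-read throughput check against the mount (somebigfile is a placeholder)
dd if=/nfs/local/data/somebigfile of=/dev/null bs=1M count=1024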

1 Answer


After some more fiddling with the read_ahead_kb value, that DID turn out to be the solution: https://access.redhat.com/solutions/5953561
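
In case it helps anyone else, this is roughly what the fix looks like. The 0:52 BDI id and the 15360 KiB value are examples (the Red Hat article has the authoritative guidance), and the udev rule is a best-effort sketch of the usual approach for NFS backing devices, so verify it against the article before relying on it.

# Raise readahead for the mount right now (value in KiB; BDI id is whatever `mountpoint -d` reported)
echo 15360 > /sys/class/bdi/0:52/read_ahead_kb

# To make it persist across mounts/reboots, a udev rule that applies only to NFS backing devices,
# e.g. in /etc/udev/rules.d/99-nfs-readahead.rules:
SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="/usr/bin/awk -v bdi=$kernel 'BEGIN{ret=1} $4 == bdi {ret=0} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15360"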