2

We're currently forced to limit the backup bandwidth to an NFS disk outside our network (through a VPN), because otherwise it fills up the modem's cache and we have to reboot the modem to regain connectivity.

0 22 * * *   flock rsync_wan_lock -c "rsync --rsync-path=\"nice -n5 ionice -c2 -n3 rsync\" --bwlimit 2000 -avrPq --delete-after /var/data/ /mnt/somedrive"

But that is not sufficient. Even though 2000 KiB/s ought to be about 50% of our transfer speed (we have 30 Mbps), it still fills the buffer.

So, I've read that rsync transfers in bursts and then goes silent to honour the `--bwlimit`, and that metadata still does not honour the limit. So I'm trying trickle now.

The problem is that every doc I could find uses trickle on ssh connections through the `-e` option. I don't think `-e` will work since I'm not copying over ssh. They also say that putting trickle in `--rsync-path` won't work because of the forking rsync does.

0 22 * * *   flock rsync_wan_lock -c "rsync --rsync-path=\"nice -n5 ionice -c2 -n3 trickle -s -u 1000 -d 10000 rsync\" -avrPq --delete-after /var/data/ /mnt/somedrive"

Any ideas/comments? And what's going on with this modem? Since when is it so easy to overflow a modem? The previous firewall was capped at 10000 and didn't run into cache problems.

quimnuss
  • 155
  • 8

2 Answers

0

You can try wondershaper, from a package of the same name, which is a simple script that sets global inbound and outbound bandwidth limits using the kernel's traffic-shaping `tc` command.
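A sketch of what that looks like, assuming the classic positional syntax (the exact invocation varies between wondershaper versions, and `eth0` is a placeholder for your WAN-facing interface — check your version's man page):

```shell
# Cap downstream to 10000 kbit/s and upstream to 5000 kbit/s on eth0.
# Newer wondershaper releases use -a/-d/-u flags instead of positional
# arguments; adjust accordingly.
sudo wondershaper eth0 10000 5000

# Remove the limits again:
sudo wondershaper clear eth0
```

Because it shapes at the interface level, this caps everything leaving the box, not just rsync — which is exactly what you want if the goal is to stop the modem buffer from filling.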

meuh
  • 1,563
  • 10
  • 11
0

As far as your user-level rsync is concerned, there is no network between the source directory /var/data and the destination /mnt/somedrive (the network transfer to the NFS server happens behind the scenes). Therefore trickle cannot work in this use case. On the other hand, the `--bwlimit` option does work on local transfers.

The ionice option should help, but as the --rsync-path option is ignored on local transfers there's no point trying to apply it there.

See how this works for you:

nice -n5 ionice -c2 -n3 rsync --bwlimit 2000K -avP --delete-after /var/data/ /mnt/somedrive

Note that `--bwlimit 2000K` is 2000 KiB/s, which is roughly 20 Mb/s once framing overhead is counted — considerably more than 50% of your maximum bandwidth.
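As a quick sanity check on the units (rsync's `--bwlimit` defaults to KiB/s), the conversion works out as follows:

```python
# rsync's --bwlimit takes KiB/s by default (or with an explicit K suffix).
bwlimit_kib = 2000                          # the value from --bwlimit 2000K
bytes_per_s = bwlimit_kib * 1024            # KiB/s -> bytes/s
mbit_per_s = bytes_per_s * 8 / 1_000_000    # bytes/s -> Mbit/s

print(f"{mbit_per_s:.1f} Mbit/s")           # prints 16.4 Mbit/s
```

That is about 16 Mbit/s of payload; with roughly 10 bits per byte to allow for framing overhead you get the ~20 Mb/s figure quoted above.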

roaima
  • 1,591
  • 14
  • 28
  • "Rsync-path does not work on local transfers". That can't be true. At the beginning I had it too strict and it failed to finish the backup overnight and the command does end up being what you suggested if I check it with `htop`. Are you certain of this? – quimnuss Nov 21 '17 at 16:04
  • @quimnuss yes. Try setting it to (say) `/bin/cat` or even `/tmp/doesnotexist` and see what happens with a destination first of `/tmp/localtarget`, and then as `localhost:/tmp/localtarget` – roaima Nov 21 '17 at 16:09
  • The question wrote `--bwlimit 2000` which is 2Mbps. I wonder if the cable modem has a low *upload* of 1Mbps, 30Mbps sounds like a download speed. Also I wonder if the VPN has MTU problems, "cable modem cache" sounds fishy. – axus Nov 21 '17 at 16:20
  • @quimnuss speed - 2000K = 2MB = 20Mb. I usually work on powers of ten with 10 bits/byte to allow for framing overheads. The default unit for `--bwlimit` is KiB/s. – roaima Nov 21 '17 at 16:29
  • @axus please recheck your maths. The 2000 is measured in KiB/s (approx KB/s), which is approx 2 MB/s, or 20 Mb/s. – roaima Nov 21 '17 at 16:32
  • @roaima Thanks! You're correct. Small correction in return, use `2000` instead of `2000K` in your answer. – axus Nov 21 '17 at 16:40
  • @axus I prefer to use explicit units to avoid exactly this kind of error – roaima Nov 21 '17 at 16:45
  • We have around 100Mbps down, 30Mbps up. Oh, they changed the default unit to kibibytes. Anyway, the difference between your result and mine is the framing overhead that you considered and I didn't. – quimnuss Nov 21 '17 at 16:47
  • It sounded fishy to me as well; that's what the cable company says. But it is consistent with the issue being resolved as soon as we reboot the modem, isn't it? The VPN is established by the firewall. – quimnuss Nov 21 '17 at 16:50
  • @roaima I've tested what you said that rsync-path is ignored and could not reproduce. If I set the destination directory to /mnt/doesnotexist it does copy, what was the expected result? – quimnuss Nov 22 '17 at 16:34
  • @quimnuss the destination directory for your copying is nothing whatsoever to do with `--rsync-path`. – roaima Nov 22 '17 at 17:07
  • I've tried your solution with 10MiB, but it still made the modem unresponsive after at most 30 minutes of backup. Would switching to an ssh-based backup help, rather than NFS+VPN? – quimnuss Nov 24 '17 at 08:52
  • 10MiB, so `--bwlimit=1000K`? – roaima Nov 24 '17 at 23:00
  • goddamit I screwed that up again. I'll set it to 600 KiB/s, which should be around 5Mbps and get back to you. – quimnuss Nov 24 '17 at 23:07
  • 1
    Now it's looking good. I'll see how's it going thru next week and accept the answer. – quimnuss Nov 24 '17 at 23:34
  • May I reopen this? Here's my complete crontab line: `flock rsync_wan_lock -c "nice -n1 ionice -c2 -n2 rsync -avrPq --bwlimit=700KiB --delete-after /var/data/foo /mnt/foo_bk/. >> /var/log/cron.log"` I still got rates of 20 Mbps on the 40s-average of iftop. I re-ran the command on bash and got a nice 7Mbps on 40s-average as expected. Is cron messing something up? – quimnuss Jan 09 '18 at 11:18
  • @quimnuss take a careful look at your `rsync` flags. `-r` is implied by `-a`. Flags `-q` and `-v` contradict each other. `-P` is `--partial --progress`, but if you're stating `-q` you won't want `--progress`. Your `--partial` is ignored when copying local to local (as you are), so omit that, too. You'll generally be better off with `--delete-during` than `--delete-after`. Try `rsync -aq --delete --bwlimit=700K ...` and make interpretation easier for the person who follows you. – roaima Jan 09 '18 at 15:43
  • It appears it's all caused by rsync's burstiness. I get bursts of 20Mbps on the 10-sec average, but the 40s is fine. Cron has nothing to do with the problem. – quimnuss Jan 10 '18 at 12:19
  • Well, the only way I found to work around this is to switch to rsync's ssh copy instead of NFS sharing. That means I had to open ssh on the backup server and set up the keys. It's not a big deal since that server is only visible from within the VPN network. rsync then uses delta-transfer and many other tricks, which it otherwise doesn't, given that it thinks it's a local transfer. In short, let rsync handle the bandwidth instead of the NFS filesystem. – quimnuss Jan 16 '18 at 11:23
  • @quimnuss that is far far more efficient than using `rsync` over NFS. As you say, for starters you get the delta algorithm. You can ask for protocol compression (`-z`) too. – roaima Jan 16 '18 at 11:47
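For reference, the ssh-based transfer described in the last comments might look something like this. The hostname, key path, and directories are placeholders, not values from the thread:

```shell
# Hypothetical sketch of the rsync-over-ssh setup described above.
# backupserver, the key file, and both paths are made-up placeholders.
flock rsync_wan_lock -c \
  'nice -n1 ionice -c2 -n2 \
   rsync -aqz --delete --bwlimit=700K \
     -e "ssh -i /root/.ssh/backup_key" \
     /var/data/foo/ backupserver:/srv/foo_bk/'
```

Because rsync now sees a remote destination, it runs its delta-transfer algorithm and `--bwlimit` throttles the actual network socket, rather than writes into an NFS mount that the kernel flushes at its own pace.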