
I have a fairly modest Ubuntu box (Jaunty) running as a small webserver. Each night I have a cron job which tars and gzips the important directories and does a simple cp to copy the archives over to a backup NAS drive that is SMB-mounted locally (effectively an "off-site" backup).
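The job boils down to something like this (directory and mount-point names here are just placeholders):

    #!/bin/sh
    # Archive the important directories
    tar czf /backups/backup-$(date +%F).tar.gz /var/www /etc /home

    # Plain copy onto the SMB-mounted NAS share
    cp /backups/backup-$(date +%F).tar.gz /mnt/nas-backup/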

The network connection to the box is 802.11g (54 Mbps), so it is naturally quite slow, but the issue is that while the files are being copied, the wireless bandwidth between the webserver and the router is completely saturated by the copy, and web requests are either refused or incredibly slow to get a response.

I've tried using Trickle in standalone mode to throttle the copy procedure, but this didn't appear to make any difference.
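For the record, the trickle invocation was along these lines (the rate is just an example value, in KB/s):

    # Standalone mode, upload capped at 512 KB/s
    trickle -s -u 512 cp /backups/backup-$(date +%F).tar.gz /mnt/nas-backup/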

Does anyone have suggestions or advice? I suspect I need to run some form of QoS on the server but honestly have no idea where to start. I was hoping for an easy, silver-bullet solution ;)

Thanks, Xerxes

3 Answers


There is a wput utility, which works the other way around from the better-known wget.
It can be used to upload your files at a controlled rate.

--limit-rate=RATE

If you don’t want Wput to eat up all available bandwidth, specify this flag. RATE is a numeric value. The units ’K’ (for KiB) and ’M’ (for MiB) are understood. The upload rate is limited on average, meaning that if you limit the rate to 10K and Wput was just able to send with 5K for the first seconds, it will send (if possible) afterwards more than 10K until the average rate of 10K is fulfilled.
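So, assuming the NAS also exposes FTP, a throttled upload could look something like this (host, credentials and paths are placeholders):

    # Cap the upload at roughly 100 KiB/s
    wput --limit-rate=100K /backups/backup-$(date +%F).tar.gz \
        ftp://backupuser:secret@nas.local/backups/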


About trickle,

trickle is a userspace bandwidth manager. Currently, trickle supports the shaping of any SOCK_STREAM (see socket(2)) connection established via the socket(2) interface. Furthermore, trickle will not work with statically linked executables, nor with setuid(2) executables.

nik
  • Hi nik - I access the remote storage device using a locally mounted SMB share, so wput won't help here. As for trickle not working with statically linked binaries, `ldd /bin/cp` shows a dynamic link dependency on libc.so.6, which leads me to believe it should be fine. setuid isn't applicable here as the cron job runs as root anyway. –  Jul 17 '09 at 00:56
  • Hmm, in that case, I think the other answers are more appropriate. – nik Jul 17 '09 at 05:08
  • But you could separate the paths by not using the share: use `wput` over the network independently, at a controlled rate, since your primary problem is that the backup needs a lower priority. – nik Jul 17 '09 at 05:10

If possible, use rsync. Most NAS devices provide an rsync service nowadays; it's more secure than an SMB share and it allows you to throttle bandwidth precisely. Moreover, it will only transfer the differences between your files, rather than transferring everything every time.
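If your NAS does run an rsync daemon, a throttled nightly push might look something like this (the module name and the rate are only examples):

    # Send only the changed files, capped at about 500 KiB/s
    rsync -av --bwlimit=500 /backups/ rsync://backupuser@nas.local/backups/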

wazoox
  • Hi waz - thanks for the suggestion, but the NAS unfortunately doesn't have an rsync server. I might be able to try a local rsync to the mounted directory though (see the sketch below)... it won't do any throttling, but it will limit the volume of traffic, which is even better! –  Jul 17 '09 at 06:17
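A local rsync onto the SMB mount would be something like this (the mount point is a guess); it doesn't shape bandwidth, but unchanged files are skipped:

    # Delta-copy onto the mounted share; only differences are transferred
    rsync -a --delete /backups/ /mnt/nas-backup/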

Maybe not ideal, but perhaps you could split the tar into smaller file chunks and have your script iterate through them with a sleep at the bottom of the loop?
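A rough sketch of that approach, assuming the archive lives at /backups/backup.tar.gz and the share is mounted at /mnt/nas-backup:

    #!/bin/sh
    # Break the archive into 10 MB pieces
    split -b 10M /backups/backup.tar.gz /backups/backup.tar.gz.part-

    # Copy each piece, pausing between them so the wireless link gets a breather
    for piece in /backups/backup.tar.gz.part-*; do
        cp "$piece" /mnt/nas-backup/
        sleep 30
    done

    # Reassemble later with:  cat backup.tar.gz.part-* > backup.tar.gz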

Adam Brand