
I have a tricky problem with a backup strategy. The issue is disk performance, and so far I cannot do much about it (nor about the overall pipeline design), so I was wondering whether a different tool-related approach could cut the time down (to give me some breathing room until a proper solution can be introduced).

The goal: Create a file out of an LVM snapshot, compress it, and send it to remote storage.

The problem: Disk performance is poor (and so far cannot be changed). The partition is about 120 GB, and at an average throughput of about 30 MB/s, creating the file takes roughly an hour. I use `dd if=snapshot of=snapshot_file` to create the file, but I am happy to change the tool. Piping the file creation into the compressor and sender (`dd | compress | send`) does not change much, since the bottleneck is still disk performance. I also tried experimenting with the `bs` parameter of `dd`, but to no avail.
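For reference, the streamed variant looks roughly like the sketch below; the device path, remote host, and target path are placeholders, and gzip/ssh just stand in for "compress" and "send":

```bash
# Rough sketch of the current approach; /dev/vg0/backup_snap, backuphost and
# the target path are placeholders, gzip/ssh stand in for "compress" and "send".
dd if=/dev/vg0/backup_snap bs=4M status=progress \
  | gzip -1 \
  | ssh backuphost 'cat > /backups/snapshot_file.gz'
```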

The question: How can I keep the same pipeline (make a snapshot, create a file, send it) but make it run as fast as possible?

Any ideas will be appreciated, thanks!

theo

1 Answer


If the disk is slow, there isn't much you can do to speed things up if you are already compressing on the fly and sending the output over a pipe.

You could look at pigz, which will speed up the compression stage, but it still has to read the data from disk. Piping to your sender keeps the data from hitting the disk again as a write, but the read is what the read is.
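As a rough sketch (the device path, host, and options are placeholders, not anything from your setup), the pigz version of the pipeline could look something like this:

```bash
# Hypothetical pigz variant of the same pipeline; paths and host are placeholders.
# pigz compresses on all cores by default; -p caps the number of threads.
dd if=/dev/vg0/backup_snap bs=4M \
  | pigz -1 -p 4 \
  | ssh backuphost 'cat > /backups/snapshot_file.gz'
```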

If the server you are sending it to is faster, maybe send the content first and then compress it on the other side?
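Again only a sketch with placeholder names; this assumes the remote host has pigz installed and the network link can absorb the uncompressed stream:

```bash
# Hypothetical sketch: ship the raw image and compress on the receiving side.
dd if=/dev/vg0/backup_snap bs=4M \
  | ssh backuphost 'pigz -1 > /backups/snapshot_file.gz'
```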

Ryan Gibbons
  • I am happy to even go as far as to skip the compression part entirely, so this is not an issue. On the other hand I know that `dd` might not be perfect in terms of performance, so maybe I can improve in this area? – theo May 08 '19 at 13:41