I have a home cloud server whose data directory contains about 400 GB of data.
I started a full duplicity backup of that directory. It has been running for 17 hours and has generated only about 19 GB of backup data so far (the previous full backup was 368 GB). That works out to roughly 310 KB/s (!)
Server:
- cheap tower from a few years ago
- Debian 9
- duplicity from the Debian packages (0.7.11)
- Intel J4205 @ 1.5 GHz, 1 MB cache
- cheap hard drive, but still capable of 50+ MB/s
Backup destination:
- Synology DiskStation, also capable of 50+ MB/s (much more, in fact)
- connected via Ethernet (an independent file copy to it generally runs at around 100 MB/s)
- a backup passphrase is used, but encryption parameters are left at their defaults
The CPU seems stuck at 100% on a single duplicity process.
Is this expected? I understand that my hardware is cheap and a bit old, but is it supposed to be SO slow? I would ideally like a speedup of at least 40-100x by upgrading. How can I be sure that this kind of performance improvement is even achievable, and what should I aim for?
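To check whether gpg alone explains the throughput, I suppose I could run something like the following on the server (the passphrase is a throwaway placeholder, and feeding /dev/urandom is only a rough stand-in for what duplicity actually pipes through gpg):

```
# Rough single-core test: push 1 GiB of incompressible data through gpg
# symmetric encryption and discard the output; dd prints the achieved rate.
dd if=/dev/urandom bs=1M count=1024 \
  | gpg --batch --symmetric --pinentry-mode loopback --passphrase test -o /dev/null
```

If that also crawls along at a few hundred KB/s, the CPU (or gpg's default settings) would seem to be the bottleneck; if it runs at tens of MB/s, the slowdown is presumably elsewhere in duplicity.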
PS: it seems that duplicity uses GnuPG, which by default uses SHA-256. Still, if that is so slow on a 2016 CPU, does that mean you couldn't even browse the web on this PC?
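For what it's worth, the raw hash and cipher speed of the CPU can be sanity-checked with openssl (this benchmarks OpenSSL's implementations rather than GnuPG's, so it is only an indication):

```
# Single-core hash and cipher throughput, for comparison with the ~310 KB/s above
openssl speed sha256 aes-256-cbc
```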