
I have finicky home internet; rsync might get disconnected at any time, and my large files will have to restart from the beginning.

Is there a way to split, say, a 1 GB file into 50 MB increments, send them over to the destination, and then combine them? That way, even if the transfer is cut off, I will have some percentage of the file saved at the other end.
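For context, a minimal sketch of the manual approach I have in mind, assuming GNU split is available (file names, host, and paths are placeholders):

    # Split a large file into 50 MB pieces (suffixes sort lexicographically: aa, ab, ...)
    split -b 50M bigfile.iso bigfile.part.

    # Transfer the pieces; an interruption only loses the piece in flight
    rsync -av bigfile.part.* user@remote:/data/incoming/

    # Reassemble on the destination (glob order matches the split order)
    ssh user@remote 'cat /data/incoming/bigfile.part.* > /data/bigfile.iso'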

  • See this thread https://unix.stackexchange.com/questions/48298/can-rsync-resume-after-being-interrupted or search for rsync partial. – Francisco1844 Jun 14 '22 at 13:40

2 Answers


This person already gave a great explanation: https://unix.stackexchange.com/a/165417/446381

So to achieve my goal, I should use the --append-verify flag with rsync.
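For example, something along these lines (source and destination paths are placeholders):

    # Resume an interrupted transfer by appending to the existing data on the
    # destination, after verifying the already-transferred portion with a checksum
    rsync -av --append-verify /local/bigfile.iso user@remote:/data/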

The --partial flag seems a bit redundant to me. It transfers the data as a hidden file and only later renames it to the partial file's name, which I don't like, because a Ctrl-C might disrupt that renaming step.

  • I think you are mistaken wrt `--partial`. In my experience it handles interruptions with ctrl/C or other events perfectly fine, continuing at the next run where it left off when it was interrupted. The only thing it cannot of course handle gracefully is `kill -9`, but nobody in their right mind does that anyway. – Tilman Schmidt Jun 14 '22 at 16:26
  • @TilmanSchmidt Does that mean I should add `--partial` along side `--append-verify` for the best results when interruptions or Ctrl-C happens? –  Jun 14 '22 at 17:42
  • In my experience `--partial` alone is enough. No need for `--append-verify`. My scripted transfers are quite often interrupted by `timeout` (analogously to ctrl/C) or by network outages, and every time the following `rsync --partial` run quite reliably resumes where the previous one left off. – Tilman Schmidt Jun 20 '22 at 16:46

Short version: To make rsync better at restarting where it left off, add the --inplace flag. No need to split up large files.

Longer version:

Breaking the file into smaller pieces and transferring each one is a good idea. However, you're working too hard: internally, rsync already breaks files into 64k chunks... kind of. There is a way to make rsync do what you want: add the --inplace flag.

Let's look at the larger problem you're having: Your internet connection is unreliable and you need a way to get large files sent.

If you use the --inplace flag (which implies --partial), you'll get the desired result. Every time a transfer is interrupted, rsync will leave the files in a state that makes it efficient to continue where it left off the next time you run the same rsync command.

Just use --inplace and run the rsync command multiple times until everything gets copied.
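As a sketch, one way to automate the retries (paths and the retry delay are placeholders):

    # Keep re-running rsync --inplace until it exits successfully;
    # each run resumes interrupted files rather than restarting them
    until rsync -av --inplace /local/bigfiles/ user@remote:/data/bigfiles/; do
      echo "rsync interrupted, retrying in 10 seconds..." >&2
      sleep 10
    done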

If you are very paranoid, once all the files have copied successfully, do one more pass with the --checksum (-c) flag added. This does a very slow byte-by-byte re-check instead of using each file's size and timestamp to decide which files can be skipped. Since all the files were copied properly already, it shouldn't find any more work to do; that said, I sometimes do this just for peace of mind. Don't use this flag during the initial runs, because it is very slow and wasteful: it re-reads every block of every file.
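The final verification pass might look like this (same placeholder paths as above):

    # Optional last pass: checksum every file on both ends instead of trusting
    # size/timestamp; slow, but confirms nothing was silently corrupted
    rsync -avc /local/bigfiles/ user@remote:/data/bigfiles/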

– TomOnTime