
I am writing backup software in C# for my organization. I have a problem with the time it takes to back up my workstation to a shared folder on a server. If I compress the files directly to the shared folder, with the temp file also created on the shared folder, the compression takes 3 minutes; if I set the temp dir to a folder on the workstation instead, it takes 2 minutes.
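Roughly, the job looks like this (just a sketch of my setup; it uses DotNetZip's ZipFile with its TempFileFolder property, and all paths here are placeholders):

    using Ionic.Zip;

    class Backup
    {
        static void Main()
        {
            using (var zip = new ZipFile())
            {
                zip.AddDirectory(@"C:\Data");   // placeholder source folder

                // Slow case (~3 min): TempFileFolder not set, so the temp file
                // is created next to the output, i.e. on the share.
                // Fast case (~2 min): temp file created on the workstation:
                zip.TempFileFolder = @"C:\Temp";

                zip.Save(@"\\server\backups\workstation.zip");  // placeholder share
            }
        }
    }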

I tested the same job with another backup program, and its backup, with the temp file created directly on the shared folder, also takes 2 minutes.

What is wrong with DotNetZip?

Lo.
1 Answer


Without seeing any code, I would imagine it is streaming the output binary file directly to the server backup location. The result is that every chunk that gets written has to go over the network and be acknowledged by the client/server connection.

When you write it to your local system first and then move it to the server location, however, you are performing a single sequential transfer, as opposed to an individual read/write operation for each segment of the file written by the stream.

It's similar to how contiguous file operations are faster on SATA drives. If you copy a single 3 GB file, you can reach very high speeds; if you copy 3,000 files of 1 KB each, the effective write speed is much lower, because it is treated as 3,000 separate operations instead of one operation running at full speed.

Do you know if the other backup programs save the backup locally before moving it? I would imagine they construct a temp file which is then moved server-side.
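If it helps, one way to force that single-transfer behaviour yourself is to save the archive to a local path first and then move the finished file to the share. A rough sketch (assuming DotNetZip's Ionic.Zip.ZipFile; the paths are placeholders):

    using System.IO;
    using Ionic.Zip;

    class Backup
    {
        static void Main()
        {
            string localTemp = Path.Combine(Path.GetTempPath(), "workstation.zip");
            string target = @"\\server\backups\workstation.zip";   // placeholder share

            using (var zip = new ZipFile())
            {
                zip.AddDirectory(@"C:\Data");   // placeholder source folder
                zip.Save(localTemp);            // all the small writes hit the local disk
            }

            if (File.Exists(target))
                File.Delete(target);            // File.Move won't overwrite an existing file
            File.Move(localTemp, target);       // one sequential copy over the network
        }
    }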

Baaleos
  • Sorry for my bad English. I had the same idea about the file being sent in segments. The other software does the direct transfer without saving the file locally. – Lo. Feb 18 '16 at 14:47