I am migrating to a new server and, as well as the HTML/PHP files, I have a directory containing around 90,000 files totalling 24 GB which needs to be moved.
When I did a test migration, I used tar to create a tarball on the old host, fetched it with wget on the new host, and then extracted it. Whilst this worked fine, it took around 3 hours all in. On the actual migration that would mean 3 hours of downtime, since I would need to ensure no new files arrived and no existing files changed while the copy was in progress.
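For what it's worth, one variant I have seen suggested is streaming the tar over ssh, which overlaps the pack, transfer and unpack steps instead of running them one after another. A rough sketch, assuming ssh access between the two hosts (the user, hostname and paths are placeholders for my setup):

    # Stream the archive straight to the new host instead of writing a
    # tarball and fetching it with wget -- pack, transfer and extract
    # all run at the same time. Run this on the old server.
    # "user", "new-server" and the /var/www paths are placeholders.
    tar -czf - -C /var/www files | ssh user@new-server 'tar -xzf - -C /var/www'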
I am now planning the actual migration and am trying to find quicker ways of doing it, and wondered about rsync. I have only ever used rsync locally, and only on a small number of files, so would running rsync against 90k files be quicker than the method above?
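Something like this is what I had in mind: a first pass while the old server is still live to copy the bulk of the data, then a short final pass during the downtime window to pick up only what has changed (again, user, hostname and paths are placeholders):

    # First pass: run while the old site is still live; copies the bulk
    # of the 24 GB with no downtime needed.
    rsync -az /var/www/files/ user@new-server:/var/www/files/

    # Final pass, during the downtime window: transfers only files that
    # are new or changed since the first pass, and removes anything
    # deleted on the old server in the meantime.
    rsync -az --delete /var/www/files/ user@new-server:/var/www/files/

My understanding is that the second pass only has to transfer the deltas, so the downtime would shrink to however long the changed files take rather than the full 24 GB.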
I am not so worried about CPU, memory or network usage as long as the process itself completes more quickly, as this will be done out of hours when the system is quieter anyway.