
We're in the process of migrating ~500 GB of data from our end-of-life, in-house server to a new Citrix-based hosted DFS.

Our provider has been citing issues for some time trying to sync the data, saying that every time a user changes a file, the sync has to be done again.

We've migrated the bulk of the data, but we're struggling with our "Client" folders, where most of our work is done. These contain the usual Office files, PDFs, etc., but also files from applications such as Sage (accounting software).

I'm currently running a DirStat scan on one of our mapped drives; it's ~30% complete and has already found over 500,000 files, ranging from 1 KB up to maybe 250 MB (many, many small ones, particularly for Sage).

Extrapolating, that suggests over 1 million files in total, which is a lot, but we're a very small business compared to others, so we can't be the only company having this issue.

My question: is this a recognised problem when migrating from in-house to cloud using a mirrored sync, or are we missing something?

Sorry I don't have more specifics - I'm just our in-house IT guy relaying between the provider and the rest of the company, so my terminology is likely off.

My understanding is that for every single file, our new cloud server has to connect to our existing server, copy the file, close the connection, and move on to the next file. I can see this being very time-consuming, but I don't know what else we can do to speed up the process.
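For what it's worth, the usual way around re-copying everything on each pass is an incremental mirror (e.g. robocopy /MIR on Windows, or rsync), which skips files whose size and timestamp haven't changed. A toy sketch of that idea in Python (the function name and paths are my own, just to illustrate; real tools do much more):

```python
import os
import shutil

def incremental_sync(src, dst):
    """Copy only files that are new or changed (by size/mtime) from src to dst.

    Returns the number of files actually copied, so a second pass over
    unchanged data reports 0 instead of re-transferring everything."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            st = os.stat(s)
            if os.path.exists(d):
                dt = os.stat(d)
                if dt.st_size == st.st_size and int(dt.st_mtime) >= int(st.st_mtime):
                    continue  # unchanged since the last pass; skip it
            shutil.copy2(s, d)  # copy2 preserves mtime, so the next pass skips it
            copied += 1
    return copied
```

The first pass copies everything; each later pass only touches files users have changed since, which is what makes an initial bulk copy plus a short final "cutover" sync feasible.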

Thanks for any advice.

  • There should be relatively few changes to the data in the sense that ALL of the files aren't changing, right? So they need to do an initial copy and then a final "cutover" copy during a maintenance window. If they're not using a tool that can do this then that's likely the cause of the slowdown. If they have to do a "full" copy every time a file or files change then it's never going to complete. – joeqwerty Jan 22 '19 at 14:04
  • If that's what they're doing, I would begin to doubt their competence. – Michael Hampton Jan 22 '19 at 14:23
  • How much time? What tool is used (Robocopy/Rsync/etc.)? What is the latency between your location and their location? – Greg Askew Jan 22 '19 at 14:29
  • Thanks for the replies - RE: "cutover" - I believe this is what they are doing, but they say a lot of files are being updated and that it's causing sync operations to fail. Apparently they've split our data into about 5 "sections" and are running sync jobs on each one. RE: tool/latency, I would have to ask to find out, unless there's a way I can check remotely. I'm told it took over 3 hours to sync a 3 GB folder with maybe 30,000 files inside it. – user506410 Jan 22 '19 at 14:50

0 Answers