Possible Duplicate:
Copy large file from one Linux server to another

I need to transfer about ~3 TB of data to another server. I am currently using rsync -z but it is going at 250 kB/s, so it will take forever. How can I speed it up?

Matthew Hui

2 Answers

Mail a hard drive to the server and have a technician plug it in.

joeforker
    I agree. This is probably the fastest, most cost effective way of getting it done. – Justin Pearce Oct 03 '11 at 18:32
  • Yep, this is where the old pigeon beats the inter-tubes. It'll have to be a big heavy-duty pigeon, but still :-) This is also a good opportunity to keep an archive/backup on a bunch of hard disks, kept in a vault. – DutchUncle Oct 03 '11 at 18:37
    http://en.wikipedia.org/wiki/IP_over_Avian_Carriers where you have much larger packet sizes. Oh, and UDP. – cjc Oct 03 '11 at 18:42
  • +1 "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." -Andrew Tanenbaum – Mike1980 Oct 03 '11 at 19:33
You can try splitting the data into, say, 3000 pieces with the split command and transferring them with netcat (the nc or netcat command) over UDP, which is usually faster than TCP. Then rsync them afterwards to make sure everything arrived intact.
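A minimal sketch of that approach (hostnames, ports, and paths are placeholders, and netcat flag syntax varies between the GNU, BSD, and nmap variants, so check your local man page):

```shell
# Sender: split the data into 1 GB pieces (suffixes part_aa, part_ab, ...)
split -b 1G /data/archive.tar /data/pieces/part_

# Receiver: listen on UDP port 9000 and write the incoming piece to disk
# (BSD netcat syntax; port 9000 is an arbitrary example)
nc -u -l 9000 > /data/pieces/part_aa

# Sender: stream one piece over UDP -- note UDP gives no delivery guarantee,
# so pieces can arrive truncated or corrupted
nc -u receiver.example.com 9000 < /data/pieces/part_aa

# Afterwards: rsync's delta transfer repairs any dropped or damaged pieces
rsync -av /data/pieces/ receiver.example.com:/data/pieces/
```

On the receiving side, `cat /data/pieces/part_* > archive.tar` reassembles the original file once every piece has been verified.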

Michał Šrajer
  • Academic answer - correct but totally useless. It won't magically make things 100 times faster, and that is what is needed here. A mere 25% faster upload won't solve the problem. – TomTom Oct 03 '11 at 18:57