16

I need to transfer files from one CentOS server to another. I will be transferring 5MB files about every 10 minutes, and I do not need encryption.

What is an easy way for fast transfer of files?

Is there something simpler than ftp?

Thanks!

Franklin Piat
Alex L

10 Answers

25

rsync

I'd use rsync before I used ftp or tftp.

It has more options and, in my experience, gives more reliable transfers.
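
As a minimal sketch (the hostname and paths are placeholders, not from the original answer), a one-shot push of a directory might look like:

rsync -av /source/dir/ user@desthost:/dest/dir/

The trailing slash on the source copies the directory's contents rather than the directory itself.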

KPWINC
  • I've also found that rsync typically gets higher throughput than anything else (scp, cifs, nfs) – Ophidian Jul 10 '09 at 19:30
  • what about http transfers? – Alex L Jul 14 '09 at 02:58
  • @Ophidian Do you mean using rsync as a daemon? Otherwise how can it be faster than scp, since both use ssh and there is encryption? – balki Feb 15 '11 at 09:07
  • @balki Yes, the rsync daemon. It isn't particularly chatty, does a good job of feeding data off the disk onto the line, and it does as little work as necessary to complete the request (applies diffs for text files for instance). – Ophidian Feb 15 '11 at 13:48
22

tar over ssh is okay, but tar over TCP via netcat is about as low-overhead as you can get! If this is a one-time thing, give this a shot:

On the receiver:

nc -l -p 8989 | tar x

On the sender:

tar cf - /source-path | nc (receiving host ip address) 8989
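
(One caveat: netcat variants differ. The -l -p combination above is for traditional/GNU netcat; with BSD-style netcat the listener is simply nc -l 8989.)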

If this is something you're going to do regularly, I'd probably use rsync.

Evan Anderson
  • +1 for netcat, the swiss-army knife – chmeee Jul 10 '09 at 20:14
  • Shame on you for not reading, Evan. Ha ha ha! It's actually not a one-time thing. He said he's going to be transferring 5MB files about every 10 minutes. Perhaps sending via Morse code would be a good alternative? ;-) (note: private joke between Evan and myself) – KPWINC Jul 10 '09 at 21:12
9

Two people have mentioned tar over ssh, but didn't say how to do it. For the record, the basic procedure is to run:

tar cf - files... | ssh remotehost 'cd /destination && tar xvf -'

Or, if you want to start transfers from the receiving end:

ssh remotehost 'cd /source && tar cf - files' | tar xvf -

The advantage of doing it this way over Evan's netcat solution is that the whole thing can be started from one computer; you don't have to coordinate two netcat invocations. If you need this to run automatically, you can set up an ssh key that lets you make connections without a passphrase, and use that key for these connections.
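
A minimal sketch of that key setup (the key filename here is just an example), using the standard OpenSSH tools:

# Generate a passphrase-less key and install it on the remote host
ssh-keygen -t rsa -N '' -f ~/.ssh/transfer_key
ssh-copy-id -i ~/.ssh/transfer_key.pub remotehost

# The transfer can then run unattended, e.g. from cron
tar cf - files... | ssh -i ~/.ssh/transfer_key remotehost 'cd /destination && tar xf -'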

ssh has a -C option to compress its data stream, or you can use GNU tar's built-in compression:

tar zcf - files... | ssh remotehost 'cd /destination && tar xzvf -'

Rsync is another option, but its strong suit is in updating files that already exist on the receiving end. I've found it to be slower than scp or tar/ssh when using it to transfer files that don't already exist on the other end.

Kenster
  • +1 What, you mean not everybody intuitively knows how to do tar over ssh? Weird. :) – chaos Jul 10 '09 at 20:14
  • tar, by itself, is not reliable - there is no integrity checking; with SSH, however, you get integrity because the transport detects in-flight changes to the data. Rsync is a better choice, as it will do integrity checking without encryption; the OP stated encryption was not necessary. – Kilo Jul 11 '09 at 02:38
  • What integrity checking does tar need? The TCP and ssh layers provide reliable data transport. If you're asserting that tar itself can have bugs, you have to treat rsync the same way. I have in fact had rsync transfers freeze on me due to protocol issues. I don't recall a tar/untar pipeline ever doing that. – Kenster Jul 11 '09 at 12:06
6

I'd use scp or tar over ssh, honestly. The encryption does slow things down, but the ease of setup and use, reliability, and (subjectively, of course) familiarity make me willing to take the hit, unless I really need that speed.

You can also speed up the ssh transfer by telling it to use a faster cipher than the default. The default is usually 3des; you can usually do -c des, which will obviously be faster, and -c blowfish is reputed to be fast as well, though I haven't tested it rigorously.

(Back in the days of SSHv1, you could often do -c none, but I guess somebody decided that was bad juju.)
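
For example (a sketch; the file path is illustrative), forcing blowfish for a single copy:

scp -c blowfish /source-path/file remotehost:/destination/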

chaos
4

If you have to go through scp/ssh, my experiments show that the fastest cipher enabled by default these days is RC4. You specify the cipher via '-c arcfour' in your ssh/scp command:

For the initial copy:

scp -c arcfour -r foo/ desthost:/destdir

For subsequent updates:

rsync -e 'ssh -c arcfour' -r foo/ desthost:/destdir

allaryin
3

Rsync is a good way to go because, if you find yourself transferring the same files more than once, it will speed up the copy, as this quote from the man page shows:

   rsync is a program that behaves in much the same way that rcp does, but
   has many more options and uses the rsync remote-update protocol to
   greatly speed up file transfers when the destination file is being
   updated.

   The rsync remote-update protocol allows rsync to transfer just the
   differences between two sets of files across the network connection,
   using an efficient checksum-search algorithm described in the technical
   report that accompanies this package.
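
Concretely (the hostname and paths are illustrative only), running the same command twice shows the benefit; the first run copies everything, while the second transfers only the parts that changed:

rsync -av /source/dir/ desthost:/dest/dir/
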
thepocketwade
2

FTP is pretty simple, but an even simpler way may be to create an NFS share on one machine and mount it on the other. Then copying the files will consist of doing a cp from one directory to another.
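
A rough sketch of that setup on CentOS (IP addresses and paths are examples only, and the NFS services must already be running):

# On the server: export the directory and reload the export table
echo '/srv/outgoing 192.168.1.2(rw,sync)' >> /etc/exports
exportfs -ra

# On the client: mount the share, then copy normally
mount -t nfs 192.168.1.1:/srv/outgoing /mnt/incoming
cp /mnt/incoming/*.dat /local/dir/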

Swoogan
  • depending on the requirements. I wouldn't use NFS on the Internet for instance. – Kyle Jul 10 '09 at 17:46
  • Good point. In that case I'd recommend rsync, as it can resume where it left off if interrupted, and because it only transfers the delta between the source and destination. – Swoogan Jul 10 '09 at 17:49
  • The question was very general. I like the solution posted by Swoogan, especially since the author mentioned that they need the simplest solution and do not require encryption. – integratorIT Mar 13 '15 at 11:45
2

If you want speed, you could use netcat and tar. It will be faster than ssh, rsync, or scp on a local network where encryption is not a concern.

On the destination server:

nc -l -p 7878 | tar -C /target/dir -xzf -

On the source server:

tar -czf - /source/dir | nc DestinationServer 7878

This obviously requires that netcat is actually installed. Google "netcat tar" for more info.

rjmoggach
1

I believe that you have already solved your problem, but in case your sshd listens on another port (not on the standard port 22), you can use this:

rsync -avz --rsh='ssh -pXXXXX' /local/dir/ root@192.168.1.2:/remote/dir

Note:

  • replace XXXXX with your port number
  • replace 192.168.1.2 with the correct remote server IP

mangia
-1

https://www.npmjs.org/package/gist-cli

https://github.com/settings/applications#personal-access-tokens

or this one:

https://github.com/defunkt/gist

Use the gist command to upload and download files.
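
For example, with defunkt's gist client (assuming it is installed and authenticated; exact flags vary between the two clients linked above):

gist myfile.txt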

Gank