What is the best way (or tools) to move files to a remote Ubuntu web server? Obviously a newbie question, but should I create an SMB mount, use FTP, or SSH?

Christopher Altman

5 Answers

rsync is a good choice. It normally runs over SSH, is smart about copying only what is needed, and, best of all, if it gets interrupted you can simply restart it.
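A minimal sketch, with placeholder paths, user, and host: -a preserves permissions and timestamps, -z compresses in transit, and --partial keeps partially transferred files so an interrupted run can pick up where it left off.

rsync -avz --partial /path/to/localdir/ user@remote-server:/path/to/remotedir/

The trailing slash on the source tells rsync to copy the directory's contents rather than the directory itself.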

brian-brazil

scp is good if you want to copy a few files. rsync is good if you want to repeatedly copy the same directory to other machines. If, however, you're trying to copy a large amount of data between two machines, the fastest way to do it is with netcat.

On the receiving machine, run:

# cd /dest/dir && nc -l -p 12345 | tar -xf -

On the sending machine you can now run:

# cd /src/dir && tar -cf - . | nc -q 0 remote-server 12345

You should find that everything works nicely, and a lot quicker. If bandwidth is more constrained than CPU, you can add "z" or "j" to the tar options on both ends ("tar -czf - ." on the sender, "tar -xzf -" on the receiver) to compress the data before sending it over the network. If you're on gigabit, I wouldn't bother with the compression. If it dies, you'll have to start from the beginning, but you might then find you can get away with using rsync if you've copied enough.
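For example, the gzip-compressed variant of the pair above (same placeholder directories and port) would be, on the receiver and then the sender:

# cd /dest/dir && nc -l -p 12345 | tar -xzf -

# cd /src/dir && tar -czf - . | nc -q 0 remote-server 12345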

It's worth pointing out that this does not have the security that scp or rsync-over-ssh has, so make sure you trust the endpoints and everything in between if you don't want anyone else to see the data.

Why not use scp? Because it's incredibly slow in comparison. God knows what scp is doing, but it doesn't copy data at wire speed. It isn't the encryption and decryption, because that would just use CPU, and scp hasn't been CPU-bound when I've done it. I can only assume the scp process has a lot of handshaking and SSH protocol overhead.

Why not rsync? rsync doesn't really buy you that much on the first copy. It's only on subsequent runs that rsync really shines. However, rsync requires the source to send a complete file list to the destination before it starts copying any data. If you've got a filesystem with a large number of files, that's an awfully large overhead, especially as the destination host has to hold it in memory.

David Pashley

scp and rsync over ssh are all good, but for large data sets (i.e. lots of data in lots of small files, or even just lots of large files) I use tar over ssh without compression:

 cd /path/to/localdir && tar -cf - . | ssh -o "Compression=off" someuser@someotherserver "tar -xf - -C /path/to/remote/dir"

What this does: tar creates a data stream that's sent over ssh encrypted but not compressed. If you're pushing this over an internet link, you'd be better off enabling compression instead, for example with ssh's -C flag, as shown below.
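A compressed variant, as a sketch with the same placeholder names (ssh's lowercase -C turns compression on; tar's -C just sets the extraction directory):

 cd /path/to/localdir && tar -cf - . | ssh -C someuser@someotherserver "tar -xf - -C /path/to/remote/dir"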

Brendan

If you want to move files from a Windows desktop to a remote Ubuntu server, you can try WinSCP.

WinSCP is a free, open source SFTP and FTP client for Windows. The legacy SCP protocol is also supported. Its main function is safe copying of files between a local and a remote computer.

Make sure ssh is installed on the Ubuntu server.

sudo apt-get install ssh
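On Ubuntu, the ssh metapackage pulls in both the OpenSSH client and server. As a quick sanity check that the server side is running before connecting from WinSCP (assuming a systemd-based release; on older releases, sudo service ssh status does the same):

sudo systemctl status ssh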

Jindrich

The best way to move files to a remote system would be scp.

A recursive directory copy

scp -r localdir user@hostname.com:~/

A single file copy

scp localfile user@hostname.com:~/

If you want to copy a file FROM a remote server:

scp user@hostname.com:~/remotefile .

scp is part of the ssh suite, and should be available as long as you have ssh locally and sshd running on the remote server.
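Two flags worth knowing, sketched here with a placeholder port: -p preserves modification times and modes, and -P (capital, unlike ssh's lowercase -p) selects a non-standard port.

scp -P 2222 -p localfile user@hostname.com:~/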

Joseph Kern