
Trying to copy a few TB between Solaris 10 u9 systems. A single scp only seems to be able to transfer around 120 MB/min over a 1 Gb network. If I run multiple scp copies, each one still does 120 MB/min, so it is not the network as far as I can see.

Any hints on how to tweak the Solaris settings to open a bigger pipe? I have the same problem with another piece of software that unfortunately cannot be split into separate processes.

1 Answer


This sounds like a CPU bottleneck. Check the output of the "top" command while scp is running. Try a different cipher, e.g. "scp -c blowfish-cbc ...". Some ciphers, like Blowfish, are less CPU-intensive than others, at the cost of possibly weaker encryption. You can also add "-C" (capital) to compress the data in transit. To combine the two, use "scp -c blowfish-cbc -C ...". Check the ssh man page for other ciphers to try. Another option is rsync over ssh.
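A minimal sketch of the commands above, written as a dry run so nothing is copied until you remove the leading "echo". The host and path names are placeholders, and blowfish-cbc is only available if your SSH build supports it (check the ssh man page or "ssh -Q cipher" on newer OpenSSH):

```shell
# Hypothetical source/destination -- adjust for your environment.
SRC=/export/data
DST=backuphost:/export/backup

# 1. Lighter cipher plus on-the-wire compression.
#    Drop the "echo" to actually run the transfer.
echo scp -r -c blowfish-cbc -C "$SRC" "$DST"

# 2. rsync over ssh: restartable, and only re-sends what is missing.
echo rsync -a -e "ssh -c blowfish-cbc" "$SRC/" "$DST/"
```

Note that "-C" only helps if the data compresses well; on already-compressed files it just burns more CPU.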

edgester
  • Each ssh session is using 0.3 to 0.4% CPU. I suspect Solaris is either limiting processing time/CPU usage per process, or limiting network usage per process. – user133080 Aug 22 '12 at 20:25
  • A 0.4% load might still characterize a CPU-bound process, depending on the server hardware. – jlliagre Aug 22 '12 at 21:52
  • The only ciphers apparently available are aes128-ctr and cbc. Also, to clarify, we're using the Solaris pax utility over ssh for these copies, although scp shows the same CPU utilization. The source hardware is a Sun Blade T6300; the destination is a Dell R610. – user133080 Aug 23 '12 at 14:52