
Testing server I/O latency over SSHFS with dd returns very surprising results, so much so that I'm worried there might be a problem with either the test method or the configuration.

Test 1 on a local RAID 10 disk using dd (512-byte blocks written one thousand times)

dd if=/dev/zero of=/root/testfile bs=512 count=1000 oflag=dsync

Output

1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 3.34273 s, 153 kB/s

Needless to say, very disappointing results for Test 1.


Test 2 on an SSHFS-mounted RAID 1 disk (mounted with sshfs -o reconnect -o nonempty -o allow_other -o ServerAliveInterval=15 -o cache=yes -o kernel_cache -o Ciphers=arcfour) using dd (512-byte blocks written one thousand times)

dd if=/dev/zero of=/mnt/nas/testfile bs=512 count=1000 oflag=dsync

Output

1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.0498811 s, 10.3 MB/s

Very surprising results for Test 2, considering I was averaging only 400 kB/s with NFS.
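
For reference, the full mount command looks roughly like this (user, host, and remote path are placeholders, not the real ones):

sshfs -o reconnect -o nonempty -o allow_other -o ServerAliveInterval=15 -o cache=yes -o kernel_cache -o Ciphers=arcfour user@nas:/export /mnt/nas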


Control data: Linux I/O performance test using dd

Server with RAID 10: In this example, the test data was written to an empty partition. The test system was a 2U Intel dual-CPU SC823 server with six 147 GB SAS Fujitsu MBA3147RC (15,000 rpm) hard disks and an Adaptec 5805 RAID controller with the cache activated and a BBU.

test-sles10sp2:~ # dd if=/dev/zero of=/root/testfile bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.083902 seconds, 6.1 MB/s


EDIT: Test results without dsync

Local RAID 10: 512000 bytes (512 kB) copied, 0.00283095 s, 181 MB/s

SSHFS RAID 1: 512000 bytes (512 kB) copied, 0.0557114 s, 9.2 MB/s
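
(These are the same commands as above, only with the oflag=dsync flag dropped:)

dd if=/dev/zero of=/root/testfile bs=512 count=1000
dd if=/dev/zero of=/mnt/nas/testfile bs=512 count=1000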


Question: Why is the I/O latency so low for SSHFS? Does this mean it is more suitable for caching solutions with a large number of small reads/writes than other network filesystems like NFS/CIFS?

Pavin Joseph
  • SSHFS has compression enabled by default. /dev/zero is not really a very good test unless compression is disabled with `-o Compression=no`. – Dima Chubarov May 05 '16 at 06:49
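
A re-test along the lines of this comment would disable compression and write incompressible data; a minimal sketch, with user@nas:/export and the mount point as placeholders:

sshfs -o reconnect -o allow_other -o Compression=no user@nas:/export /mnt/nas
dd if=/dev/urandom of=/mnt/nas/testfile bs=512 count=1000 oflag=dsync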

1 Answer


I strongly suspect that oflag=dsync is the cause: dd executed on the local host obeys this flag, but SSHFS doesn't pass it through to the server, so the writes are absorbed by caching instead.
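
One way to check this is to compare against a run that forces a single fsync at the end instead, which dd supports via conv=fsync; roughly (same test file as above):

dd if=/dev/zero of=/mnt/nas/testfile bs=512 count=1000 conv=fsync

If the throughput then drops back towards the link/disk limit, that supports the idea that the per-write dsync was being absorbed by caching.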

Usually NFS should be one of the fastest options for accessing remote storage. Compared to SSHFS, your data doesn't have to go through encryption and the FUSE stack.

Henrik
  • Yes, this does increase performance on the local RAID 10 to good levels (181 MB/s), but the SSHFS performance still blows my mind (9.2 MB/s). – Pavin Joseph May 04 '16 at 12:44
  • Did you expect more or less sshfs performance? – Henrik May 04 '16 at 12:52
  • Obviously less, considering NFS was giving me around 400 kB/s and SSHFS with all that encryption overhead is consistently much better on latency and throughput. – Pavin Joseph May 04 '16 at 12:53
  • 1
    Did you use async with nfs? Otherwise you'll have quite the same situation as writing with dd and dsync – Henrik May 04 '16 at 12:56
  • I did not use async, only sync with NFS. The connection of my test machine is 100 Mbps, so the 9.2-10.3 MB/s SSHFS speed is right up there near the practical limit – Pavin Joseph May 04 '16 at 13:10
  • That's the reason for your poor NFS performance... – Henrik May 04 '16 at 13:12
  • I will post the NFS results with async soon. In my case, as the NAS is outside the local network, NFS is not really an option anyway. – Pavin Joseph May 04 '16 at 13:16
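
For the async NFS re-test discussed in the comments, a rough sketch of the server-side export (in /etc/exports) and the client mount; the export path, network range, and hostname are placeholders:

/export/nas 192.168.1.0/24(rw,async,no_subtree_check)
mount -t nfs nas:/export/nas /mnt/nas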