SSHFS tests for server I/O latency using dd
The tests return very surprising results, so much so that I'm worried there might be a problem with either the test method or the configuration.
Test 1 on a local RAID 10 disk using dd (512 bytes written one thousand times)
dd if=/dev/zero of=/root/testfile bs=512 count=1000 oflag=dsync
Output
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 3.34273 s, 153 kB/s
Needless to say, very disappointing results for Test 1.
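To sanity-check the local number independently of dd, per-request latency could also be measured directly with ioping, if it happens to be installed (-D requests direct I/O to bypass the page cache; -W switches from the default read test to writes):
ioping -c 10 -D /root        # read latency, direct I/O
ioping -c 10 -D -W /root     # write latency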
Test 2 on a RAID 1 disk mounted with sshfs -o reconnect -o nonempty -o allow_other -o ServerAliveInterval=15 -o cache=yes -o kernel_cache -o Ciphers=arcfour, using dd (512 bytes written one thousand times)
dd if=/dev/zero of=/mnt/nas/testfile bs=512 count=1000 oflag=dsync
Output
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.0498811 s, 10.3 MB/s
Very surprising results for Test 2, considering I was averaging only 400 kB/s with NFS.
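If the cache=yes / kernel_cache options are what is masking the dsync writes, remounting with caching disabled and synchronous writes forced should pull the SSHFS numbers back down to something network-bound. A sketch of that control test, with user@nas:/export standing in for my actual mount source:
umount /mnt/nas
sshfs -o reconnect -o allow_other -o cache=no -o direct_io -o sshfs_sync user@nas:/export /mnt/nas
dd if=/dev/zero of=/mnt/nas/testfile bs=512 count=1000 oflag=dsync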
Control data: Linux I/O performance test using dd
Server with RAID 10: In this example, the test data was written to an empty partition. The test system was a 2U Intel dual-CPU SC823 server with six 147 GB SAS Fujitsu MBA3147RC (15,000 rpm) hard disks and an Adaptec 5805 RAID controller with the cache activated and a BBU.
test-sles10sp2:~ # dd if=/dev/zero of=/root/testfile bs=512 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
512000 bytes (512 kB) copied, 0.083902 seconds, 6.1 MB/s
EDIT: Test results without dsync
Local RAID 10: 512000 bytes (512 kB) copied, 0.00283095 s, 181 MB/s
SSHFS RAID 1: 512000 bytes (512 kB) copied, 0.0557114 s, 9.2 MB/s
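Working the dsync numbers back to per-write latency: 3.34273 s / 1000 writes ≈ 3.3 ms per synchronous write on the local RAID 10, which is plausible for a disk-backed array, while 0.0498811 s / 1000 writes ≈ 50 µs per write over SSHFS, which is shorter than a typical network round trip. That suggests each write is being acknowledged from a cache rather than from the remote disk.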
Question: Why is I/O latency so low for SSHFS? Does this mean it is more suitable for caching solutions with a large number of small reads/writes than other network file systems like NFS/CIFS?