I have three disks making up a RAID-Z vdev in a ZFS pool on Ubuntu Server 16.04.2. They are connected via a cheap PCIe SATA card, a single eSATA cable, and a port multiplier at the far end.
iostat shows these disks are performing extremely poorly, as shown below:
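This is roughly how I'm watching the latency (the 5-second interval is arbitrary):

# Extended per-device stats (await, %util), refreshed every 5 seconds
iostat -x 5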
But I'm struggling to understand why. Both the controller (a Syba SI-PEX40064) and the port multiplier (an unbranded one with a SiI3726 chipset) support port multiplication and FIS-based switching.
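For what it's worth, this is how I've been checking that the kernel actually brought the port multiplier up and whether it has logged any link trouble; the grep patterns are just my guesses at the relevant libata messages:

# Look for port-multiplier detection, link speed negotiation, and any resets or failed commands
dmesg | grep -iE 'pmp|sata link|hard resetting|failed command'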
If it were a single disk failing I would expect the wait times to be high on only one disk, not on all three attached via the port multiplier.
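To rule that out, a per-disk SMART check along these lines (the device names are from my setup, and the attribute list is just the usual suspects for a dying disk or a bad link, nothing exhaustive):

# Overall health plus the attributes that usually point at media problems or cabling/CRC errors
for d in /dev/sdi /dev/sdj /dev/sdk; do
    echo "== $d =="
    smartctl -H "$d"
    smartctl -A "$d" | grep -iE 'reallocated|pending|offline_uncorrect|udma_crc'
done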
These disks were only installed in this configuration fairly recently (2-3 weeks ago), and this issue has only appeared in the last few hours despite constant use of the pool. I'm not sure how ZFS works internally; I suppose it's possible it wasn't writing to those disks until now?
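To see whether the pool has actually been using that vdev, and whether ZFS itself has logged any errors, something like this should show it ("tank" is a placeholder for my pool name):

# Pool layout, scrub/resilver state, and any read/write/checksum errors ZFS has recorded
zpool status -v tank

# Read/write ops and bandwidth broken down per vdev and per disk, sampled every 5 seconds
zpool iostat -v tank 5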
Any suggestions on what to investigate, or potential factors that could cause this, would be greatly appreciated!
DD speed test
root@server:~# dd if=/dev/sdi of=/dev/null bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00211904 s, 4.9 GB/s
root@server:~# dd if=/dev/sdi of=/dev/null bs=1M count=10 iflag=direct
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 12.5821 s, 833 kB/s
root@server:~# dd if=/dev/sdj of=/dev/null bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00196007 s, 5.3 GB/s
root@server:~# dd if=/dev/sdj of=/dev/null bs=1M count=10 iflag=direct
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 11.6849 s, 897 kB/s
root@server:~# dd if=/dev/sdk of=/dev/null bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 40.8416 s, 257 kB/s
root@server:~# dd if=/dev/sdk of=/dev/null bs=1M count=10 iflag=direct
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 6.79282 s, 1.5 MB/s
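As far as I understand it, the multi-GB/s figures without iflag=direct are just the page cache, so the sub-MB/s direct numbers above look like the real throughput. For a longer sanity check I'm also planning to run a bigger direct read per disk; the 256 MiB size is arbitrary:

# Larger direct read so the page cache can't flatter the result (adjust count/device as needed)
dd if=/dev/sdi of=/dev/null bs=1M count=256 iflag=direct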