
My current setup is this:

  • MegaRAID 9361-16i
  • 12 x 12 TB SATA disks WD DC HC520
  • Connected to the controller via 3 of the 4 SAS ports
  • Configured as either RAID-10 or RAID-6 without spares; the RAID-6 runs in write-back mode
  • Formatted with ext4
  • Ubuntu 22.04

My benchmarking application uses sequential direct IO to a single file. I tested it with up to 8 concurrent writes of 4-6 MiB each until the file size reached several GiB. On an SSD it can easily reach 2 GiB/s or more.
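
For reference, a fio run along these lines should approximate the same access pattern (the file path, block size, and total size below are placeholders; adjust them to the array's mount point):

```bash
# Sequential direct writes with libaio, 8 requests in flight, 4 MiB blocks,
# to a single multi-GiB file. /mnt/raid/testfile is a placeholder path.
fio --name=seqwrite --filename=/mnt/raid/testfile \
    --rw=write --bs=4m --size=16g \
    --ioengine=libaio --iodepth=8 --direct=1 \
    --numjobs=1 --group_reporting
```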

The measured throughput is only around 150 MiB/s for the RAID-10 and 320 MiB/s for the RAID-6. According to their data sheet, these disks should sustain a sequential throughput of 243 MiB/s. So shouldn't I get closer to 2.3 GiB/s for the RAID-6?

So right now I'm wondering what I'm doing wrong, where the bottleneck is, and how to upgrade the server to solve it. The easiest upgrade path would be to replace the SATA disks with equivalent SAS disks. Would this solve my issue?

Homer512
  • Try using `dd` to write directly to the block device: `dd if=/dev/zero of=/dev/sdb1 bs=1024k oflag=direct`. You'll have to rebuild your filesystem afterwards... – Andrew Henle Mar 21 '23 at 11:08
  • *I tested it with up to 8 concurrent writes of 4-6 MiB* How are you doing that? I doubt ext4 handles simultaneous write operations well, and simultaneous write operations to the same spinning disk(s) tend to *hurt* performance. – Andrew Henle Mar 21 '23 at 11:13
  • @AndrewHenle With libaio. As I understand it, I basically did the same as `fio --ioengine=libaio --iodepth=8 --direct=1`, similar to what is suggested here for testing sequential speed. However, I will try fio with those exact parameters as a comparison – Homer512 Mar 21 '23 at 11:35
  • Missed posting the link in an edit: https://linuxreviews.org/HOWTO_Test_Disk_I/O_Performance – Homer512 Mar 21 '23 at 12:04

1 Answer


I don't expect switching to SAS HDDs to have any meaningful effect on your write speed. Rather, try increasing your stripe element size (good starting values are 256K/512K for RAID-10 and 64K for RAID-5/6). Also, for testing, you can try enabling the physical disks' DRAM cache (but be sure to understand its implications for data safety, which are controller dependent, before putting this setup into production).
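
If it helps, a rough StorCLI sketch of those changes (the controller/VD numbers and enclosure:slot IDs are assumptions; check the exact syntax against your StorCLI version):

```bash
# StorCLI sketch, assuming the controller is /c0 and the virtual drive /c0/v0;
# the enclosure:slot range 252:0-11 is a placeholder. Re-creating the virtual
# drive with a different strip size destroys the data on it.

# Show the current virtual drive layout and cache policies
storcli64 /c0/vall show all

# Enable the physical disks' write cache for testing
storcli64 /c0/v0 set pdcache=on

# Example: re-create the RAID-6 virtual drive with a 64 KiB strip, write-back
storcli64 /c0 add vd r6 drives=252:0-11 strip=64 wb ra direct
```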

That said, real-world workloads are rarely limited by sequential reads/writes; they tend to be much more sensitive to random IOPS, and HDDs are notoriously slow at small random operations.

EDIT: if for some reason you can't obtain high performance from your RAID controller, try setting it to pass-through mode (i.e. no RAID at all) and configuring a ZFS RAIDZ2 vdev or an equivalent MD RAID 12-drive RAID-6 array.
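
As a sketch of both options (device names, pool name, and tuning values below are assumptions, not a tested recipe):

```bash
# Option A: ZFS RAIDZ2 across the 12 pass-through disks.
# /dev/sd[b-m] and the pool name "tank" are placeholders; prefer /dev/disk/by-id paths.
zpool create -o ashift=12 tank raidz2 /dev/sd[b-m]
zfs set recordsize=1M tank        # large records suit big sequential files

# Option B: MD RAID6 with ext4 on top.
mdadm --create /dev/md0 --level=6 --raid-devices=12 --chunk=64 /dev/sd[b-m]
mkfs.ext4 -E stride=16,stripe-width=160 /dev/md0   # 64 KiB chunk / 4 KiB block = 16; 10 data disks
```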

shodanshok
  • Thanks. The stripe size for the RAID-6 was, I believe, 128 KiB or 256 KiB, but I will try different ones, and the cache. However, I know that I only need sequential write speed. I'm writing the application, and all it needs to do in the end is copy 55 GiB files from the SSD to the HDD RAID. – Homer512 Mar 18 '23 at 19:24
  • @Homer512 if you only require high sustained sequential writes, give ZFS a try (I edited my answer). – shodanshok Mar 18 '23 at 19:49
  • *if for some reason you can't obtain high performance from your RAID controller* IMO the various array configurations also need to be tested without going through the filesystem, especially RAID6. A 10+2 RAID6 array with large stripe size is begging for read-modify-write bottlenecks for any kind of writes, but the reported slow performance has me wondering what else the problem could be. I'd think "I tested it with up to 8 concurrent writes of 4-6 MiB ..." needs some investigation, especially if OP is only creating one block device out of all those disks, and then using ext4. – Andrew Henle Mar 21 '23 at 11:15
  • Welp, that is embarrassing. The benchmark reported words per second, not bytes, underestimating performance by a factor of 4. So performance is as it should be. I'm marking your answer as accepted since, independent of the circumstances, I wondered whether SAS has a benefit over SATA in such a RAID, and you clarified that. – Homer512 Mar 22 '23 at 19:28