
I need ultrafast backup restores for my solution.

I am going to use two hosts: an operating server and a backup server.

Both servers will be connected directly via 40GbE PCIe 4.0 x8 Ethernet adapters so that they can use the maximum bandwidth.

For the operating server I chose 24x SATA SSDs (22 active, 2 hot spares) working in RAID 10. The potential total write throughput of the selected drives in this RAID 10 configuration is about 50Gbps.

For the backup server I chose 24x SATA HDDs with 200+ MB/s sequential write speed (22 active, 2 hot spares), also working in RAID 10. The potential read throughput of the selected drives in this RAID 10 configuration is about 45Gbps.
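
For reference, the rough arithmetic behind those figures is below (the per-drive speeds are my own assumptions: roughly 550 MB/s sequential write per SSD and 250 MB/s sequential read per HDD):

    ACTIVE_DRIVES = 22                 # 24 drives minus 2 hot spares
    MIRROR_PAIRS = ACTIVE_DRIVES // 2  # RAID 10 stripes across mirrored pairs

    ssd_write_mb_s = 550  # assumed sequential write per SATA SSD, not a vendor spec
    hdd_read_mb_s = 250   # assumed sequential read per SATA HDD (drives are spec'd 200+ MB/s)

    # Writes go to both drives of each mirror, so write throughput scales with the pairs.
    ssd_array_write_gbps = MIRROR_PAIRS * ssd_write_mb_s * 8 / 1000   # ~48 Gbps
    # Reads can be served by either side of a mirror, so reads scale with all active drives.
    hdd_array_read_gbps = ACTIVE_DRIVES * hdd_read_mb_s * 8 / 1000    # ~44 Gbps

    print(f"SSD array write: ~{ssd_array_write_gbps:.0f} Gbps")
    print(f"HDD array read:  ~{hdd_array_read_gbps:.0f} Gbps")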

For now I am looking for a RAID controller to install in both servers that can sustain 40Gbps of throughput from the drives to the Ethernet card.

I am stuck here because there is almost no information about RAID controller throughput, just per-channel link speeds like 6Gb/12Gb/24Gb, which say nothing about the real maximum throughput over PCIe.

Can someone please suggest a RAID controller that supports 24 SATA drives and RAID 10 and can deliver 40Gbps of throughput, or point me to an article that explains the real-world speed of such RAID controllers?

Thank you.

Mitchel

1 Answer

A PCIe 4.0 x8 connector can theoretically handle 16 GT/s per lane, or about 15.754 GB/s in total, which (after encoding overhead) equates to approximately 126Gbps of usable bandwidth - far exceeding your 40Gbps network speed. Looking at a controller: if it can feed about 5 GB/s consistently then it can fill a 40Gbps network channel, and given that most SSD-tuned controllers can handle that per channel, and presumably you'll have multiple channels, you should be fine from that perspective.
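
As a quick back-of-envelope check (just a sketch, using the PCIe 4.0 per-lane rate and its 128b/130b encoding):

    GT_PER_LANE = 16      # GT/s per lane for PCIe 4.0
    ENCODING = 128 / 130  # 128b/130b line encoding overhead
    LANES = 8

    pcie_gbps = GT_PER_LANE * LANES * ENCODING  # ~126 Gbps usable
    pcie_gb_per_s = pcie_gbps / 8               # ~15.75 GB/s

    nic_gbps = 40
    needed_gb_per_s = nic_gbps / 8              # ~5 GB/s sustained fills the 40GbE link

    print(f"PCIe 4.0 x8: ~{pcie_gbps:.0f} Gbps (~{pcie_gb_per_s:.2f} GB/s)")
    print(f"Controller rate needed to fill 40GbE: ~{needed_gb_per_s:.0f} GB/s sustained")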

Also, I'm very pleased to see you using R10; so many other users come here with arrays like this on R5/R50, and then we see them returning later to ask for help rebuilding their data :)

There are a couple of areas that do concern me here, though.

First is NUMA. You don't mention what CPUs you plan to use, but if you can, I'd try to ensure the NIC(s) and controller(s) are on the same NUMA node - you don't want to keep the QPI/UPI busy just transferring data all the time, as it would lower overall performance, especially latency (there's a quick way to check this from sysfs, sketched below).

Second would be interrupt management; you're going to have to do a lot of it. I'd suggest buying good/expensive NICs, as they'll have all manner of tuning and offloading capabilities to help you get as close as possible to your performance targets. If you either didn't tune or used a dumb NIC, the cores handling interrupts would do nothing but handle them. I'm sure we're happy to help with this on this site, but please search for previous questions and answers first, as a lot of this ground has been covered here before.

My final concern would be the use of 40Gbps at all. The reason is that some implementations of '40G' are really just 4 x 10Gbps links bound into a hardware-controlled LAG/etherchannel under the covers; this can be limiting, and in the worst case could restrict you to 10Gbps of point-to-point bandwidth. In a situation where one server is talking to multiple clients this gets 'smeared out' and more or less goes away, but in a point-to-point scenario I honestly don't know the exact behaviour, and I wanted to let you know it is of concern to me. An alternative may be to use dual- or quad-port 25Gbps NICs and bond them together within the OS - that way you would have control of those ports and their balancing, with the added bonus of potentially being able to get 50/100Gbps out of them. Just a thought anyway.
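
Here's that sysfs locality check I mentioned - a minimal sketch in Python, with placeholder device names you'd need to swap for your actual NIC interface and the RAID controller's PCI address:

    # Report which NUMA node a NIC and a RAID controller sit on via Linux sysfs.
    from pathlib import Path

    def numa_node_of_net_iface(iface: str) -> str:
        # e.g. /sys/class/net/ens1f0/device/numa_node
        return Path(f"/sys/class/net/{iface}/device/numa_node").read_text().strip()

    def numa_node_of_pci_device(pci_addr: str) -> str:
        # e.g. /sys/bus/pci/devices/0000:3b:00.0/numa_node
        return Path(f"/sys/bus/pci/devices/{pci_addr}/numa_node").read_text().strip()

    nic_node = numa_node_of_net_iface("ens1f0")           # placeholder NIC name
    raid_node = numa_node_of_pci_device("0000:3b:00.0")   # placeholder PCI address

    # Ideally both report the same node; -1 means the platform reports no affinity.
    print(f"NIC on NUMA node {nic_node}, RAID controller on NUMA node {raid_node}")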

One final note, and I'm not going to beat you up about this: this site specifically rules out product-selection/recommendation questions, for reasons explained in our help pages, but I'll let it slide this time as it was a well-written question :) Also, I happen to like Broadcom, Adaptec and Intel, but others may prefer LSI and MegaRAID - to be honest they're much of a muchness these days.

Good luck.

Chopper3
  • Didn't Broadcom buy LSI a few years ago? – Andrew Henle Jul 05 '22 at 16:25
  • Yes I'm sure you're right, it's hard remembering who bought what :) – Chopper3 Jul 05 '22 at 16:31
  • Thank you very much for such a detailed explanation. I really appreciate it and did not expect such an extensive answer. Special thanks for the comments about NUMA and the ideas about the NIC; you have probably saved me from a lot of potential issues. My remaining concern is RAID controller performance itself. E.g. if I get a controller that is PCIe 4.0 x8 and designed for 24 channels, can I expect that the controller, in RAID 10, can utilize the whole throughput potentially provided by the array of drives? As I understand it, 6/12/24Gb says nothing beyond the max bandwidth to each connected drive/expander. – Mitchel Jul 06 '22 at 09:15
  • An additional question is about SAS expanders; I have never dealt with expanders before. What do you think: can I expect enough performance for my planned configuration if I decide to use expanders together with 16i or 8i RAID controllers? It is much easier to find a PCIe 4.0 x8 16i or 8i controller on the market. And one more question: is it bad practice to use expanders from a brand different from the RAID controller's? Thank you. – Mitchel Jul 06 '22 at 09:23
  • "Can I expect that controller being in RAID10 can utilize whole throughput potentially provided by the array of drives?" - there's no real way to predict sorry, you're going to have to try it and see how you get on, but realistically I think there's a reasonable chance you'll get at least close to 40Gbps of throughput. That said, and this answers your other question too, I have little experience of SATA other than on my personal PC's - everything I've done for literally decades has been SCSI/SAS ore FC, and NVMe/NVMeF in recent years - I know they can add latency though. – Chopper3 Jul 06 '22 at 09:39
  • I believe that is it. Thank you! – Mitchel Jul 06 '22 at 10:36