A PCIe 4.0 x8 slot runs at 16 GT/s per lane, which works out to roughly 15.75 GB/s of usable bandwidth across the eight lanes, i.e. approximately 126 Gbps - far exceeding your 40 Gbps network speed. Looking at the controller, if it can feed 5 GB/s consistently then it can fill a 40 Gbps network link, and given most SSD-tuned controllers can sustain that per channel - and presumably you'll have multiple channels - you should be fine from that perspective.
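If you want to double-check that arithmetic yourself, here's a throwaway sketch - the figures assume PCIe 4.0's 128b/130b line coding and ignore protocol overhead, so treat them as approximate:

```python
# Rough bandwidth sanity check (approximate: 128b/130b coding only,
# no allowance for PCIe protocol overhead).
GT_PER_LANE = 16          # PCIe 4.0 signalling rate per lane (GT/s)
LANES = 8                 # an x8 slot
ENCODING = 128 / 130      # 128b/130b encoding efficiency

pcie_gbps = GT_PER_LANE * LANES * ENCODING   # ~126 Gbps usable
pcie_gbytes = pcie_gbps / 8                  # ~15.75 GB/s
network_gbps = 40

print(f"PCIe 4.0 x8: ~{pcie_gbps:.0f} Gbps (~{pcie_gbytes:.2f} GB/s)")
print(f"Headroom over a {network_gbps} Gbps link: ~{pcie_gbps / network_gbps:.1f}x")
```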
Also, I'm really pleased to see you using R10 - so many other users come here with arrays like this built on R5/R50, and we later see them back asking for help recovering their data :)
There are a few areas that do concern me here though.

First is NUMA: you don't mention which CPUs you plan to use, but if you can I'd try to ensure the NIC(s) and controller(s) sit on the same NUMA node - you don't want to keep the QPI/UPI link busy just shuttling data between sockets all the time, as that lowers overall performance and especially hurts latency (there's a quick way to check this below).

Second is interrupt management: you're going to have to do a lot of this, so I'd suggest buying good (i.e. expensive) NICs, as they'll have all manner of tuning and offloading capabilities to help you get as close as possible to your performance targets. If you don't tune, or you use a dumb NIC, the cores handling interrupts will do nothing but handle them (there's a sketch below for seeing how your interrupts are currently spread). I'm sure we're happy to help with this on this site, but please search previous questions and answers first, as a lot of this ground has been covered here before.

My final concern is the use of 40 Gbps at all. For some implementations of '40G', what you actually get under the covers is 4 x 10 Gbps links bound into a hardware-controlled LAG/EtherChannel. This can be limiting: because the LAG hashes each flow onto a single member link, the worst case could limit you to 10 Gbps of point-to-point bandwidth (the third sketch below shows why). In a situation where one server is talking to multiple clients this gets 'smeared out' and largely goes away, but in a point-to-point scenario I honestly don't know the exact behaviour of every implementation, and I wanted to let you know it was of concern to me. An alternative may be to use dual- or quad-port 25 Gbps NICs and bond them together within the OS - that way you have control of the ports and their balancing, with the added bonus of potentially being able to get 50/100 Gbps out of them. Just a thought anyway.
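On the NUMA point, here's a minimal sketch of how you could check device locality on Linux via sysfs - the interface name and PCI address are just examples, substitute your own:

```python
# Minimal sketch (Linux/sysfs): report which NUMA node a NIC and a storage
# controller live on. "ens1f0" and "0000:3b:00.0" are example names only.
from pathlib import Path

def numa_node_of_nic(iface: str) -> int:
    """NUMA node of a network interface (-1 if the platform doesn't report one)."""
    return int(Path(f"/sys/class/net/{iface}/device/numa_node").read_text())

def numa_node_of_pci(pci_addr: str) -> int:
    """NUMA node of any PCI device (e.g. a RAID/HBA controller) by its address."""
    return int(Path(f"/sys/bus/pci/devices/{pci_addr}/numa_node").read_text())

if __name__ == "__main__":
    nic_node = numa_node_of_nic("ens1f0")        # example NIC
    hba_node = numa_node_of_pci("0000:3b:00.0")  # example controller
    print(f"NIC on node {nic_node}, controller on node {hba_node}")
    if nic_node != hba_node:
        print("Warning: data will cross the QPI/UPI link between sockets")
```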
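On interrupts, a rough sketch of how you might see where a NIC's interrupts currently land by parsing /proc/interrupts - again, "ens1f0" is only an example interface name:

```python
# Sketch: total up a NIC's interrupt counts per CPU from /proc/interrupts.
# On a well-tuned box you'd expect its queues pinned to cores on the same
# NUMA node as the NIC, not sprayed across every core.
from collections import defaultdict

def irq_counts_for(device_substr: str, path: str = "/proc/interrupts") -> dict:
    per_cpu = defaultdict(int)
    with open(path) as f:
        cpus = f.readline().split()            # header row: CPU0 CPU1 ...
        for line in f:
            if device_substr in line:
                fields = line.split()          # fields[0] is the IRQ number
                for cpu, count in zip(cpus, fields[1:1 + len(cpus)]):
                    per_cpu[cpu] += int(count)
    return dict(per_cpu)

if __name__ == "__main__":
    for cpu, count in sorted(irq_counts_for("ens1f0").items()):
        print(f"{cpu}: {count}")
```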
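And on the 40G point, a toy illustration of why a flow-hashed 4 x 10G LAG can cap a single point-to-point transfer at roughly 10 Gbps - this uses CRC32 as a stand-in hash, real switches and NICs use their own layer2/3/4 hash functions:

```python
# Toy model of LAG member selection: each flow's 5-tuple hashes to exactly
# one member link, so one point-to-point flow never exceeds one member's
# 10 Gbps, while many flows spread out across all four members.
import zlib

def member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                n_links: int = 4) -> int:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return zlib.crc32(key) % n_links

if __name__ == "__main__":
    # A single flow: always the same member link, so ~10 Gbps max.
    print("single flow ->", member_link("10.0.0.1", "10.0.0.2", 49152, 2049))
    # Many flows (varying source port): spread across members, limit smears out.
    for port in range(49152, 49160):
        print(port, "->", member_link("10.0.0.1", "10.0.0.2", port, 2049))
```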
One final note, and I'm not going to beat you up about this: this site specifically rules out product selection/recommendation questions, for reasons explained in our help pages, but I'll let it slide this time as it was a well-written question :) For what it's worth I happen to like Broadcom, Adaptec and Intel, but others may prefer LSI/MegaRAID - to be honest they're much of a muchness these days.
Good luck.