
I'm investigating the performance of RDMA under different latency settings. To be more specific, I have built a testbed with 3 servers connected in a line (A-B-C) via fiber links.

For TCP, I can use the Linux tc tool on node B to add latency and packet loss. For RDMA, however, the traffic does not go through the OS kernel, so it bypasses tc.
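For reference, this is the kind of netem setup I use on node B for the TCP case (a minimal sketch; `eth1` is just a placeholder for the interface on B that faces C):

```
# On node B: add 10 ms delay and 0.1% loss on the egress interface toward C
# (eth1 is a placeholder; substitute the actual interface name in your testbed)
tc qdisc add dev eth1 root netem delay 10ms loss 0.1%

# Remove the qdisc again when done
tc qdisc del dev eth1 root
```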

I have searched GitHub with keywords like "RDMA latency simulation" and found nothing useful.

I am wondering: is this possible for RDMA? If yes, how can I do it?

  • The answer probably depends on the specific RDMA NIC you use. If it is a RoCE device with virtualization support, perhaps you could redirect the RDMA packets to software and use tc similarly to how it was used for TCP. – haggai_e Jun 25 '23 at 14:15
