I'm investigating RDMA performance under different latency settings. More specifically, I have built a testbed with three servers connected in a line (A-B-C) via fiber links.
For TCP, I can use the Linux tc tool on node B to add latency and packet loss. For RDMA, however, the transmission bypasses the OS kernel, so tc has no effect on it.
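For reference, this is roughly what I do on node B for the TCP case (a minimal sketch; `eth1` here is just a placeholder for B's interface facing node C):

```bash
# Add 10 ms of one-way delay and 1% packet loss on the egress interface
# (eth1 is a placeholder for node B's interface toward node C)
sudo tc qdisc add dev eth1 root netem delay 10ms loss 1%

# Remove the qdisc afterwards
sudo tc qdisc del dev eth1 root
```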
I have searched GitHub with keywords like "RDMA latency simulation" but found nothing useful.
Is this possible for RDMA at all? If yes, how can I do it?