I have two servers sitting in a rack running Ubuntu 16.04, connected by a 1 meter Ethernet cable; both have standard Intel Ethernet adapters. The ping between the two is about 300 us (microseconds). This is in line with the latency I've seen in most Gigabit Ethernet setups.
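For reference, this is roughly how I measure it (192.168.1.2 is just a placeholder for the second server's address):

```
# send 100 echo requests and look at the rtt min/avg/max summary
ping -c 100 192.168.1.2
# on my setup the reported avg is around 0.3 ms
```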
But this latency still seems quite high compared to theoretical limits; why is it? A default-size ping frame takes only about a microsecond to serialize at 1 Gbps each way, so nearly all of the 300 us must be spent somewhere other than on the wire. I have read that 1 GbE can achieve 40 us latency.
Is this 300 us the minimum latency I can expect, or is there software tuning I can perform to reduce it? What is the bottleneck? Is it Linux? On this gamer web site for Windows, the tool in the screenshot seems to suggest 40 us latency in most cases, but that doesn't help me much for my Linux servers.
(How) can I make my ping 40 us?
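For example, is adjusting interrupt coalescing the kind of tuning that would help? This is just a sketch of what I mean by software tuning (eno1 is a placeholder interface name, and I haven't verified that my driver exposes these settings):

```
# show the NIC's current interrupt coalescing settings
ethtool -c eno1
# try to minimize the coalescing delay; driver support for these parameters varies
sudo ethtool -C eno1 rx-usecs 0 tx-usecs 0
```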
EDIT: Looking at the screenshot again, the 40 us shown might not be a roundtrip time at all, but rather a specific delay within the Windows kernel; in that case the 40 us would only be part of a total roundtrip time that is higher and not listed. This would also be in line with the answers here.
(I originally asked this question on superuser; at the time it wasn't clear to me that ServerFault would be a more appropriate community for network performance questions, and I don't have enough reputation there to move the question, so I reposted it here. I have also switched the hardware to server hardware.)