If I have a client-server architecture, is the up delay (from the client to the server) always the same as the down delay (from the server to the client)? For any individual latency measurement, I would of course expect minor differences. When averaged over a large number of samples, however, I would expect the two delays to be identical or very nearly so.
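To make the measurement concrete, here is a minimal sketch of the kind of setup I have in mind (the echo server address, port, and sample count are placeholders, not a real deployment). It measures round-trip times over UDP and then halves the average, which bakes in exactly the symmetry assumption I am asking about:

```python
import socket
import time

# Hypothetical echo server; replace with a real endpoint.
SERVER = ("127.0.0.1", 9000)
SAMPLES = 100

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

rtts = []
for i in range(SAMPLES):
    payload = str(i).encode()
    start = time.monotonic()
    sock.sendto(payload, SERVER)
    try:
        data, _ = sock.recvfrom(1024)
    except socket.timeout:
        continue  # drop lost packets from the average
    if data == payload:
        rtts.append(time.monotonic() - start)

if rtts:
    avg_rtt = sum(rtts) / len(rtts)
    # Halving the RTT assumes the up and down delays are equal --
    # the very assumption this question is about.
    print(f"avg RTT: {avg_rtt * 1000:.2f} ms")
    print(f"estimated one-way delay (RTT/2): {avg_rtt * 500:.2f} ms")
```

Note that a client alone can only observe the round trip; it cannot split the RTT into its up and down components without cooperation (e.g., synchronized clocks) from the server.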
That said, I suppose it is possible for a system to use one route for traffic from the client to the server, and a different route for traffic from the server back to the client.
Does this make sense? Is it really possible for a client-server system to have, for example, a larger latency from the client to the server than from the server to the client? If so, what about vice versa?