I am trying to get a high-level understanding of TCP and have come to a confusing point.
Let's say we have a server S and a client C which are connected.
Suppose S pushes a message to C, and before C notices the incoming message, C pushes a message of its own towards S.
Now we are in a state where S is waiting for the ACK of its message, and C is likewise waiting for the ACK of its own message.
How does the specification avoid this kind of deadlock? Many of the resources online quickly dive into specific implementation details, but I am instead looking for a high-level explanation of how these deadlocks are dealt with.
(I am assuming the answer has to do with buffering but have found no specific information on the topic.)
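To make the scenario concrete, here is a minimal sketch of what I mean, using Python's standard `socket` module over the loopback interface (the hostnames, variable names, and messages are just placeholders I made up for illustration). Both sides call `sendall()` before either one reads, which is the "both waiting" situation I described:

```python
import socket

# Minimal loopback setup: a "server" S and a "client" C.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
listener.listen(1)

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(listener.getsockname())
s, _ = listener.accept()

# Both sides send before either one reads anything.
s.sendall(b"message from S")
c.sendall(b"message from C")

# Each side can then read the other's message.
msg_at_c = c.recv(1024)
msg_at_s = s.recv(1024)
print(msg_at_c)  # b'message from S'
print(msg_at_s)  # b'message from C'

for sock in (s, c, listener):
    sock.close()
```

When I run something like this it does not actually deadlock, which is what prompted my guess about buffering above, but I would like to understand from a high level why the protocol itself guarantees this.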