There is a unit-test program (C++ on Solaris) I came across in my company.
It tests whether an error is reported when the client tries to send a message to the server over a TCP connection that has already been closed by the server.
Scenario:
Client                           Server
|<----------Connected----------->|
|                                |
|           2 sec pause          |
|                                |
|<-----shutdown(SHUT_RDWR)-------|
|<-----------close---------------|
|                                |
|           2 sec pause          |
|                                |
|-------------send-------------->|
|-------------send-------------->|
General behaviour: the 1st send() returns the number of bytes written to the socket, and the 2nd send() fails, i.e. returns -1. This is the expected behaviour.
Inconsistent behaviour: both send() calls succeed, causing the test case to fail.
Note: there is no delay between the two send() calls.
The behaviour is inconsistent on SunOS 5.10 (64-bit).
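To make the question concrete, here is a minimal, self-contained reconstruction of the sequence the test exercises. This is my own sketch, not the actual unit test; the port number 5555 and the fork()-based layout are just for illustration:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cerrno>
    #include <csignal>
    #include <cstdio>

    int main() {
        signal(SIGPIPE, SIG_IGN);  // so a failed send() returns -1/EPIPE
                                   // instead of killing the process

        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5555);                   // arbitrary test port
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); // same machine, as in the test
        bind(lfd, (sockaddr*)&addr, sizeof addr);
        listen(lfd, 1);

        if (fork() == 0) {                 // child acts as the client
            close(lfd);
            int cfd = socket(AF_INET, SOCK_STREAM, 0);
            connect(cfd, (sockaddr*)&addr, sizeof addr);

            sleep(4);                      // by now the server has shut down and closed

            ssize_t n1 = send(cfd, "msg1", 4, 0);
            int e1 = n1 < 0 ? errno : 0;
            ssize_t n2 = send(cfd, "msg2", 4, 0);  // no delay between the two sends
            int e2 = n2 < 0 ? errno : 0;
            printf("send #1 = %ld (errno %d)\n", (long)n1, e1);
            printf("send #2 = %ld (errno %d)\n", (long)n2, e2);
            close(cfd);
            _exit(0);
        }

        // parent acts as the server
        int sfd = accept(lfd, nullptr, nullptr);
        sleep(2);                          // 2 sec pause after connecting
        shutdown(sfd, SHUT_RDWR);
        close(sfd);
        close(lfd);
        wait(nullptr);
        return 0;
    }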
Question 1: Can anyone please explain the "General behaviour" described above? Why did the 1st send() not fail? Why did the 2nd send() fail?
I know very little about TCP, so I did a bit of research and guessed that maybe after close() the TCP stack on the server side is in the FIN-WAIT-2 state, and when it receives the 1st message after close() (via the 1st send()) it responds with an ACK, and maybe in this ACK it informs the client to stop sending any more messages?
Question 2: Regarding the "Inconsistent behaviour" - sometimes the 2nd send() does not fail - why?
Is it because the 2nd message is sent before the ACK for the 1st message is received? (This explanation of course depends on the guess in Question 1.)
Note: The client and server are on the same machine, and snoop did not help.