I am working on a project that involves client-server communication over TCP using Google Protocol Buffers. On the client side, I am basically using NetworkStream.Read() to do a blocking read from the server into a byte array buffer.
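For reference, this is roughly what my read path looks like (a simplified sketch; the buffer size and variable names are just placeholders, not my actual code):

    using System.Net.Sockets;

    // client is an already-connected TcpClient
    NetworkStream stream = client.GetStream();
    byte[] buffer = new byte[4096];

    // Blocks until some data arrives (or the server closes the connection)
    int bytesRead = stream.Read(buffer, 0, buffer.Length);
    if (bytesRead == 0)
    {
        // Remote host shut down the connection and all data has been received
    }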
According to MSDN documentation,
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
It is the same with asynchronous reads (NetworkStream.BeginRead and EndRead). My question is: when exactly does Read()/EndRead() return? I assumed it would return only after the buffer had been completely filled, but in my own testing that is not the case: the number of bytes read in a single operation varies a lot. That makes sense to me, because if the server pauses while sending messages, the client should not have to wait until the read buffer is full. But does Read()/EndRead() have some inherent timeout mechanism?
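To illustrate what I mean by the byte counts varying: the only way I have found to get a complete protobuf message is to keep calling Read() in a loop until the expected number of bytes has arrived. The sketch below assumes a 4-byte big-endian length prefix in front of each message; that framing and the helper name are just an example, not something NetworkStream itself provides.

    // Reads exactly 'count' bytes into 'buffer', calling Read() as many times as needed.
    // Returns false if the connection was closed before 'count' bytes arrived.
    static bool ReadExactly(NetworkStream stream, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int n = stream.Read(buffer, offset, count - offset);
            if (n == 0)
                return false; // remote host shut down the connection
            offset += n;
        }
        return true;
    }

    // Example usage with my assumed 4-byte length prefix:
    byte[] header = new byte[4];
    if (ReadExactly(stream, header, 4))
    {
        int length = (header[0] << 24) | (header[1] << 16) | (header[2] << 8) | header[3];
        byte[] body = new byte[length];
        if (ReadExactly(stream, body, length))
        {
            // body now holds one complete protobuf message, ready for parsing
        }
    }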
I was trying to find out how Mono implements NetworkStream.Read() and traced it down to an extern method, Receive_internal().