
I know that in order to have multiple WSASend() calls outstanding simultaneously, I need to provide a unique WSAOVERLAPPED and WSABUF instance for each call. But this means that I have to keep track of these instances for each call, which will complicate things.

I think it would be a better idea to create a thread that makes WSASend() calls not simultaneously but sequentially. This thread will wait on a queue that holds WSASend() requests (each request contains the socket handle and the string I want to send). When I eventually call WSASend() I will block the thread until I receive a wake-up signal from the thread that waits on the completion port, telling me that the WSASend() has completed, and then I go on to fetch the next request.

If this is a good idea, then how should I implement the queue, and how do I make a blocking fetch call on it (instead of polling)?
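To make that concrete, the kind of queue I have in mind is roughly the following (just a sketch with made-up names; whether this is even the right approach is part of what I'm asking):

```cpp
#include <winsock2.h>

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// A send request: the socket handle plus the bytes to send.
struct SendRequest {
    SOCKET      socket;
    std::string data;
};

// Thread-safe queue with a blocking Pop(); no polling involved.
class SendQueue {
public:
    void Push(SendRequest request) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(request));
        }
        condition_.notify_one();   // wake the sender thread if it is waiting
    }

    // Blocks until a request is available, then returns it.
    SendRequest Pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        condition_.wait(lock, [this] { return !queue_.empty(); });
        SendRequest request = std::move(queue_.front());
        queue_.pop();
        return request;
    }

private:
    std::mutex              mutex_;
    std::condition_variable condition_;
    std::queue<SendRequest> queue_;
};
```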

John
  • The first question is whether the peer can make sense of all this concurrent sending. Can it really? – user207421 Mar 05 '15 at 11:32
  • @EJP Yes, each `WSASend()` call would send a message with a defined length; the messages are not related to each other. – John Mar 05 '15 at 11:35
  • Sounds like you should just use blocking I/O to me, with a mutex to sequentialize it. Much simpler than what you propose here. – user207421 Mar 05 '15 at 11:39
  • @EJP But I want to handle thousands of clients, so blocking I/O will not work. – John Mar 05 '15 at 11:43
  • If you're going to use sender threads, you can use a synchronization mechanism to lock the queue when queueing/dequeueing. That should be enough. – jweyrich Mar 05 '15 at 11:50
  • Now that I think about it, keeping track of the `WSAOVERLAPPED` and `WSABUF` instances is not such a complicated task compared to creating a sender thread! – John Mar 05 '15 at 14:08
  • @John You *are* using blocking I/O under this proposal, just implementing it yourself on top of async I/O. – user207421 Mar 05 '15 at 23:55
  • @EJP I have no problem in directly using blocking I/O if it allowed me to handle thousands of clients. Unfortunately this is only allowed with overlapped I/O. – John Mar 06 '15 at 00:53
  • Using overlapped I/O and IOCP correctly allows a system to scale to many thousands of concurrent connections. There are an infinite number of ways that you can prevent this ability to scale by doing things wrong. Your example is but one of them... As EJP says, you are simply rolling your own blocking I/O using overlapped I/O. This thread is now blocked until the I/O completes and THAT is what IOCPs and overlapped I/O (when used correctly) avoids... – Len Holgate Mar 06 '15 at 09:16

1 Answer


The WSABUF can be stack based as it is the responsibility of WSASend() to duplicate it before returning. The OVERLAPPED and the data buffer itself must live until the IOCP completion for the operation is extracted and processed.

I've always used an 'extended' OVERLAPPED structure which incorporates the data buffer, the overlapped structure AND the WSABUF. I then use a reference-counting system to ensure that this 'per operation data' exists until nobody needs it any more (that is, I take a reference before the API call initiates the operation and release that reference once the completion has been removed from the IOCP and processed - note that the references aren't strictly necessary here, but they make it easier to pass the resulting data buffer off to other parts of the code).
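As a rough sketch of that idea (the names, the fixed-size buffer and the error handling are purely illustrative, not code from any particular library):

```cpp
#include <winsock2.h>

#include <atomic>
#include <cstring>

// 'Per operation data': the OVERLAPPED, the WSABUF and the data buffer in one
// allocation, kept alive by a reference count until the completion has been
// removed from the IOCP and processed.
struct PerOperationData {
    static constexpr ULONG kBufferSize = 4096;

    OVERLAPPED        overlapped;      // the OVERLAPPED passed to WSASend()
    WSABUF            wsaBuf;
    char              buffer[kBufferSize];
    std::atomic<long> refCount{1};     // initial reference belongs to the pending operation

    void AddRef()  { refCount.fetch_add(1); }
    void Release() { if (refCount.fetch_sub(1) == 1) delete this; }
};

// Issue an overlapped send; the PerOperationData owns a copy of the data.
bool IssueSend(SOCKET s, const char *data, ULONG len) {
    if (len > PerOperationData::kBufferSize)
        return false;                  // a real implementation would split or chain buffers

    PerOperationData *op = new PerOperationData();
    std::memset(&op->overlapped, 0, sizeof(op->overlapped));
    std::memcpy(op->buffer, data, len);
    op->wsaBuf.buf = op->buffer;
    op->wsaBuf.len = len;

    int result = WSASend(s, &op->wsaBuf, 1, nullptr, 0, &op->overlapped, nullptr);
    if (result == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
        op->Release();                 // the send never started, drop its reference
        return false;
    }
    return true;                       // the IOCP thread releases the reference on completion
}

// In the IOCP loop, once GetQueuedCompletionStatus() hands back the OVERLAPPED*:
//
//   PerOperationData *op =
//       CONTAINING_RECORD(lpOverlapped, PerOperationData, overlapped);
//   // ... optionally AddRef() and hand the buffer to other parts of the code ...
//   op->Release();
```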

It is MOST efficient for a TCP connection to have a full TCP window's worth of data in transit at any one time, and to have some more data pending so that the window is always kept full and you are always sending at the maximum rate that the connection can take. To achieve this with overlapped I/O it's usually best to have many WSASend() calls pending. However, you don't want too many pending (see here), and the easiest way to achieve that is to track the number of bytes you have pending, queue bytes for later transmission, and send from your transmission queue when existing sends complete...
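A minimal per-connection sketch of that "track pending bytes and queue the rest" idea, assuming the hypothetical IssueSend() wrapper from the previous sketch and ignoring the locking you would need between the application thread and the IOCP thread:

```cpp
#include <winsock2.h>

#include <cstddef>
#include <deque>
#include <string>
#include <utility>

// Assumed from the previous sketch: issues one overlapped WSASend().
bool IssueSend(SOCKET s, const char *data, ULONG len);

// Per-connection send queue that caps the amount of data handed to WSASend()
// at any one time. Synchronisation is omitted for brevity.
class Connection {
public:
    explicit Connection(SOCKET s) : socket_(s) {}

    // Application code calls this whenever it has data to send.
    void Send(std::string data) {
        queued_.push_back(std::move(data));
        TrySendQueued();
    }

    // The IOCP thread calls this after processing a WSASend() completion.
    void OnSendCompleted(DWORD bytesSent) {
        pendingBytes_ -= bytesSent;
        TrySendQueued();
    }

private:
    void TrySendQueued() {
        // Keep issuing overlapped sends until we hit the pending-byte cap.
        while (!queued_.empty() && pendingBytes_ < kMaxPendingBytes) {
            std::string data = std::move(queued_.front());
            queued_.pop_front();
            pendingBytes_ += data.size();
            IssueSend(socket_, data.data(), static_cast<ULONG>(data.size()));
        }
    }

    static constexpr std::size_t kMaxPendingBytes = 64 * 1024;  // tune to the connection's window

    SOCKET                  socket_;
    std::size_t             pendingBytes_ = 0;
    std::deque<std::string> queued_;
};
```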

Len Holgate