I'm trying to understand whether it's possible, with edge-triggered epoll, to avoid having to call recv() multiple times for every single READ event...
Take this scenario:
- The server sends the client, say, 64 bytes and then closes the socket.
- The client's edge-triggered epoll_wait reports a READ event. Let's say the close made it into the same trigger (I have seen this race, where the close is folded into that one READ event).
- The client reads into a buffer of, say, 4k. To be optimal, I would hope that if recv returns fewer than 4k (the buffer size), you know there is no more data and can go back to epoll_wait. I think this generally works, EXCEPT in the close() case. Since a closed socket is signaled by recv returning 0, it would appear that you HAVE to call recv again to make sure you don't get a 0 back (in the general case you would get -1 with EWOULDBLOCK and continue on your merry way to the next epoll_wait call).
Given this, it seems like one would always have to call recv twice per READ event when using edge-triggered epoll... am I missing something here? It seems grossly inefficient.
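To make the question concrete, here is a sketch of the ET read handler I have in mind (names are illustrative, error handling trimmed). The short-read early return is exactly the optimization I'm asking about, and the worry is that it can hide a pending close:

```c
#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

enum read_result { READ_AGAIN_LATER, READ_PEER_CLOSED, READ_ERROR };

static enum read_result handle_read_event(int fd)
{
    char buf[4096];

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n > 0) {
            /* ... process n bytes ... */
            if ((size_t)n < sizeof buf) {
                /* Short read: this is where I'd like to stop and go
                 * back to epoll_wait -- but the peer may already have
                 * closed, and I'd miss the 0 return. */
                return READ_AGAIN_LATER;
            }
            continue; /* buffer was filled; there may be more data */
        }
        if (n == 0)
            return READ_PEER_CLOSED;      /* orderly shutdown by peer */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return READ_AGAIN_LATER;      /* socket fully drained */
        return READ_ERROR;
    }
}
```

With a socketpair you can reproduce the scenario from the bullets above: write 64 bytes, close the writer, and the first call returns READ_AGAIN_LATER on the short read without ever seeing the close; only a second call observes the 0.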