> I'm concerned about what happens when a connection to the client becomes inactive, for example because the network connection breaks in a way that doesn't send a TCP RST or FIN.
If the connection is lost in this way (perhaps by the client system being switched off or physically disconnected) then TCP at the server will detect the broken connection because it will receive no acknowledgements of sent data. It can take a couple of minutes for TCP to give up, but in this case that doesn't sound like a big problem.
The worst-case scenario is when the client system remains connected but the client process ceases to read data from the connection. In that case sent data will accumulate at the client until the client's socket receive buffer fills, and then sent data will accumulate at the server -- first in the in-kernel socket send buffer, and then in server process memory.
> I'm somewhat surprised that the `send()` method is not called with the `await` keyword in an `async` method.
`ws` predates async/await and promises by years. I imagine that the API will eventually be retrofitted, but it hasn't happened yet.
> Does the `send()` method just queue all data to be sent? What if the socket buffers become full? Can `send()` block in a way that causes starvation of clients other than the blocked one?
`WebSocket.send` ends up calling the built-in `net` module's `Socket.write`. (See the `sendFrame` function at the bottom of https://github.com/websockets/ws/blob/master/lib/sender.js for that call, and see https://nodejs.org/docs/latest-v8.x/api/net.html#net_class_net_socket for documentation of the `Socket` class.)
`Socket.write` will buffer data in the user process if the kernel cannot immediately accept the data. Data is buffered separately per `Socket`, so typically this buffering will not affect transmission on other `Socket`s connected to other clients. However, there is no bound on the amount of data one `Socket` will buffer. In the extreme case, one `Socket`'s buffered data could consume all of the server process's memory, and the resulting server crash would interfere with data delivery to all clients.
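To make that failure mode concrete, here is a minimal sketch (not from the library's documentation) of a handler that keeps pushing data regardless of whether the client is reading. It assumes the `ws` package and peeks at the underlying socket through the internal `_socket` property purely to observe the user-space buffer; treat those details as illustrative assumptions.

```js
// Hypothetical illustration: unbounded buffering toward a stalled client.
// `client._socket` (the underlying net.Socket) is an internal detail of ws,
// used here only to log how much data is piling up in user space.
const WebSocket = require('ws');

const server = new WebSocket.Server({ port: 8080 });

server.on('connection', (client) => {
  const chunk = Buffer.alloc(64 * 1024); // 64 KiB of payload per send

  // Fires every 10 ms whether or not the client is draining data.
  const timer = setInterval(() => {
    client.send(chunk); // queues the frame; never blocks the event loop
    // If the client has stopped reading, this number grows without bound.
    console.log('user-space bytes buffered:', client._socket.bufferSize);
  }, 10);

  client.on('close', () => clearInterval(timer));
});
```

Note that nothing in this loop ever blocks, so other clients keep being served; the damage shows up as memory growth in the server process rather than as starvation.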
There are several ways to avoid this problem. Two easy methods that spring to mind (both sketched in code after this list) are:
- Provide a completion callback argument to the `send` call. That callback will be passed on to the `Socket.write` call, which will fire the callback when all of that write's data has been written into the kernel. If your server refrains from sending more data to this client until the callback fires, the amount of data buffered in user space for that connection will be limited to something close to the size of the most recent `send`. (It won't be precisely that size because the buffered data will include WebSocket framing, plus SSL framing and padding if your connection is encrypted, on top of the original data passed to `send`.) Or
- Examine the `bufferSize` property of the connection's `Socket` before preparing to `send` data on that connection. `bufferSize` indicates the amount of data that is currently buffered in user space for that `Socket`. If it's non-zero, skip the `send` for that client.
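Here is a rough sketch of both strategies, again assuming the `ws` package. The helper names and the per-client flag are invented for illustration, and `client._socket` is an internal property rather than documented API.

```js
// Hypothetical sketch of both backpressure strategies with the `ws` package.
const WebSocket = require('ws');

const server = new WebSocket.Server({ port: 8080 });

server.on('connection', (client) => {
  // Strategy 1: completion-callback backpressure. `readyForMore` is false
  // while the previous send is still being copied into the kernel.
  let readyForMore = true;

  function sendWhenReady(data) {
    if (!readyForMore) return false;       // caller should retry later
    readyForMore = false;
    client.send(data, (err) => {
      // Fires once the data (plus framing) has been handed to the kernel,
      // or with an error if the connection failed.
      if (err) client.terminate();
      readyForMore = true;
    });
    return true;
  }

  // Strategy 2: skip the send while the underlying socket still has data
  // buffered in user space. `client._socket` is an internal property of ws.
  function sendIfDrained(data) {
    if (client._socket.bufferSize > 0) return false; // client is lagging
    client.send(data);
    return true;
  }

  // Example: push a periodic update using strategy 1 (or swap in strategy 2).
  const timer = setInterval(() => sendWhenReady('tick'), 1000);
  client.on('close', () => clearInterval(timer));
});
```

Either way, a slow or stalled client costs you roughly one message's worth of user-space buffering rather than unbounded growth.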