I am using websockets to transfer video-like images (a stream of frames) from a server written in Go to a client, which is an HTML page. My experience described below is with Chrome.
I receive images via the onmessage handler of the websocket. On reception of an image, I might need to complete a number of tasks asynchronously before I can display it. Even if these tasks are not finished yet, another onmessage() may fire. I do not want to queue images, because at that point I cannot keep up with the pace of the server and because there is no point in displaying old images. But I also do not want to drop these images; I would prefer not to receive them at all.
If the client used a traditional TCP connection, it would simply stop reading from the connection. This would cause the receive buffer to fill up, the receive window to close and, eventually, the server to pause sending images. As soon as the client started reading again, the receive buffer would empty, the receive window would open and the server would resume transmitting. Every time my server starts to send an image, it picks the freshest one. This pick-the-freshest behaviour, together with TCP's flow control, ensures reasonable behaviour in many cases.
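For illustration, here is a minimal Go sketch of the plain-TCP behaviour I have in mind. The names (sendFreshest, latest) and the 4-byte length prefix are made up for the example, not taken from my actual code:

```go
package sender

import (
	"encoding/binary"
	"net"
)

// sendFreshest writes the most recent frame available on latest to conn.
// If the receiver stops reading, its receive window closes, conn.Write
// blocks once the send buffer is full, and no further frame is picked
// until the receiver resumes. Each frame is sent with a 4-byte length prefix.
func sendFreshest(conn net.Conn, latest <-chan []byte) error {
	for frame := range latest {
		// Drain anything already queued so only the newest frame is sent.
	drain:
		for {
			select {
			case frame = <-latest:
			default:
				break drain
			}
		}

		var hdr [4]byte
		binary.BigEndian.PutUint32(hdr[:], uint32(len(frame)))
		if _, err := conn.Write(hdr[:]); err != nil {
			return err
		}
		// This Write is where TCP's flow control throttles the sender.
		if _, err := conn.Write(frame); err != nil {
			return err
		}
	}
	return nil
}
```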
Is it possible to get the flow control features of TCP, on which websockets are based, when using websockets? I am especially interested in a solution that relies on TCP's flow control and not on application-level flow control, since the latter tends to incur unwanted, additional latency.
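For concreteness, the websocket send loop would be the analogue of the sketch above. I am using gorilla/websocket here purely as an example library; the hope is that the blocking write provides the same backpressure, provided the browser can actually be made to stop reading:

```go
package sender

import "github.com/gorilla/websocket"

// sendFreshestWS is the websocket counterpart of the plain-TCP loop.
// WriteMessage writes to the underlying TCP connection, so it should block
// in the same way once the client's receive window closes -- but only if the
// browser stops reading from the socket, which is what the question is about.
func sendFreshestWS(conn *websocket.Conn, latest <-chan []byte) error {
	for frame := range latest {
		// Keep only the newest frame, as before.
	drain:
		for {
			select {
			case frame = <-latest:
			default:
				break drain
			}
		}
		if err := conn.WriteMessage(websocket.BinaryMessage, frame); err != nil {
			return err
		}
	}
	return nil
}
```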