19

I am using WebSockets to transfer video frames (images) from a server, written in Go, to a client, which is an HTML page. My experience described below is with Chrome.

I receive images via the onmessage handler of the WebSocket. On reception of an image, I may need to complete a number of tasks asynchronously before I can display it. Even while these tasks are unfinished, further onmessage() events may fire. I do not want to queue images, since at that point I cannot keep up with the server anyway, and there is no point in displaying stale images. I also do not want to drop the images on the client; I would rather not receive them at all.

If the client used a traditional TCP connection, it would simply stop reading from the connection. This would cause the receive buffer to fill, the receive window to close, and, eventually, the server to pause sending images. As soon as the client resumed reading, the receive buffer would drain, the receive window would reopen, and the server would resume transmitting. Every time my server starts to send an image, it picks the freshest one. This pick-the-freshest behaviour, together with TCP's flow control, ensures reasonable behaviour in many cases.
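The pick-the-freshest behaviour on the server amounts to a one-slot "mailbox" that the producer always overwrites, so the sender only ever sees the newest frame. A minimal sketch (in JavaScript for illustration only; the actual server in the question is written in Go, and the class name is made up here):

```javascript
// One-slot "mailbox": the producer overwrites the slot on every new frame,
// so when the sender is ready again it picks up only the freshest frame.
// Older, unsent frames are silently replaced, never queued.
class LatestFrame {
  constructor() {
    this.frame = null; // holds at most one (the newest) frame
  }
  put(frame) {
    this.frame = frame; // replaces whatever was there before
  }
  take() {
    const f = this.frame;
    this.frame = null;
    return f; // null when no fresh frame has arrived since the last take()
  }
}

const mailbox = new LatestFrame();
mailbox.put("frame-1");
mailbox.put("frame-2"); // frame-1 is dropped, never queued
console.log(mailbox.take()); // prints "frame-2"
console.log(mailbox.take()); // prints null
```

Combined with TCP flow control, this is what gives the "always current, never backlogged" behaviour described above: when the receive window closes, the server simply keeps overwriting the slot until it may send again.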

Is it possible to get the flow-control features of TCP, on which WebSockets is built, through the WebSocket API? I'm especially interested in a solution that relies on TCP's flow control rather than application-level flow control, since the latter tends to incur unwanted additional latency.

distributed
  • 366
  • 4
  • 13
  • Why do the tasks need to be asynchronous? If you make them synchronous you've solved the problem. Conversely if they are asynchronous, a TCP connection would have the same problem. – user207421 Oct 16 '13 at 22:14
  • Not all JS APIs have synchronous variants. Also I do not wish my application to become unresponsive just because I would like to throttle reception of images. For me it's not so much about the other APIs but the fact that the web app can't tell the browser to not accept so much data. – distributed Oct 17 '13 at 06:13

3 Answers

10

It’s now possible to have streams within WebSocket. Chrome 78 will ship with a new WebSocketStream API, which supports backpressure.

Here’s a quote from Chrome Platform Status:

The WebSocket API provides a JavaScript interface to the RFC6455 WebSocket protocol. While it has served well, it is awkward from an ergonomics perspective and is missing the important feature of backpressure. The intent of the WebSocketStream API is to resolve these deficiencies by integrating streams with the WebSocket API.

Currently applying backpressure to received messages is not possible with the WebSocket API. When messages arrive faster than the page can handle them, the render process will either fill up memory buffering those messages, become unresponsive due to 100% CPU usage, or both.

Applying backpressure to sent messages is possible but involves polling the bufferedAmount property which is inefficient and unergonomic.

Unfortunately this is a Chrome-only API, and there is no web standard at the time of writing.
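A minimal sketch of reading with backpressure via WebSocketStream. Property names follow the current draft (`opened`); the Chrome 78 origin trial exposed the same pair of streams under a different name (`connection`). The URL and the `displayFrame` handler are placeholders, and the code feature-detects so it degrades gracefully where the API is absent:

```javascript
// Hypothetical per-frame handler; stands in for the asynchronous tasks
// described in the question (decode, draw to a canvas, ...).
async function displayFrame(frame) {}

async function consumeFrames() {
  if (typeof WebSocketStream === "undefined") {
    console.log("WebSocketStream not available");
    return;
  }
  const wss = new WebSocketStream("wss://example.com/video"); // placeholder URL
  const { readable } = await wss.opened;
  const reader = readable.getReader();
  for (;;) {
    // read() is only called when the page is ready for the next frame.
    // When reads stop, the stream's internal queue fills, the browser
    // stops reading from the socket, and TCP flow control throttles
    // the server: exactly the behaviour asked for in the question.
    const { value, done } = await reader.read();
    if (done) break;
    await displayFrame(value);
  }
}

consumeFrames();
```

Note that, unlike the classic `onmessage` API, the pull-based reader means backpressure needs no explicit acks; simply not calling `read()` is enough.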

For further info see:

Simone
  • 20,302
  • 14
  • 79
  • 103
  • 1
    +1 for WebSocketStream. It's Chromium-only at the moment, but on the Chrome team we're working on standardizing it. See this [article](https://web.dev/websocketstream/) for more detail and background. – DenverCoder9 Dec 16 '20 at 11:13
6

I doubt what you are asking for is possible. There is no interface for that functionality in the WebSocket API spec. What the spec does outline, however, is a requirement that the underlying socket connection be managed in the background outside of the script that is using the WebSocket, so that the script is not blocked by WebSocket actions. When the socket receives inbound data, it wraps the data inside of a message and queues it for the WebSocket script to process. There is nothing to block the socket from reading more data while messages remain in the queue waiting for the script to process them.

The only real flow control you can implement in a WebSocket is an explicit one. When a message arrives, send back a message to acknowledge it. Make the server wait to receive that ack before sending its next message.
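The ack scheme can be sketched as a toy simulation. Both endpoints are plain in-memory objects here (the transport, function names, and frame payloads are all made up for illustration), so the sketch runs without a network, but the protocol is the one described above: the server sends the next frame only after the client acknowledges the previous one.

```javascript
// "Server": sends one frame, then waits for an ack before sending the next.
function makeServer(frames, transport) {
  let next = 0;
  return {
    start() { transport.toClient(frames[next++]); },
    onAck() {
      if (next < frames.length) transport.toClient(frames[next++]);
    },
  };
}

// "Client": processes a frame asynchronously, then acknowledges it.
function makeClient(transport, processed) {
  return {
    async onFrame(frame) {
      await new Promise((r) => setTimeout(r, 10)); // slow async processing
      processed.push(frame);
      transport.toServer("ack"); // server may send the next frame now
    },
  };
}

// Wire the two ends together through an in-memory "transport".
const processed = [];
const transport = {};
const server = makeServer(["f1", "f2", "f3"], transport);
const client = makeClient(transport, processed);
transport.toClient = (f) => client.onFrame(f);
transport.toServer = () => server.onAck();
server.start();
setTimeout(() => console.log(processed), 100); // prints [ 'f1', 'f2', 'f3' ]
```

The downside, as the question anticipates, is that every frame now costs at least one round trip of added latency, which TCP-level flow control would not.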

Remy Lebeau
  • 555,201
  • 31
  • 458
  • 770
  • 2
Like WebRTC, this is really flawed. An attacker could just ignore the ack and send as fast as possible to crash the browser. I do not know why the committee ignored that. Also, flow control and backpressure should be done by the transport protocol; in user land this is very complex, since you have to tune the buffer size to the network conditions. They should have used TCP backpressure and flow control directly. As it stands, WebSocket and WebRTC are basically useless in terms of security and stability. – Kr0e Nov 21 '14 at 09:38
  • 1
Sorry to update such an old question, but anyone using Node.js may find this npm package useful (https://github.com/baygeldin/ws-streamify). It implements exactly this approach. – alexb Jun 27 '16 at 16:27
If the WebSocket is implemented in Java using the Akka library, the backpressure should be controlled. As far as I know... – Adrian Moisa Dec 13 '18 at 09:19
1

You can do flow control on WebSocket connections (based on TCP backpressure adaptation). Here are two links to get you started:

Disclosure: I am original author of Autobahn and work for Tavendo.
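The send-side throttling that the Chrome Platform Status quote above calls "polling the bufferedAmount property" can be sketched as follows. This is illustrative only: `socket` is any object exposing the standard WebSocket `send()` and `bufferedAmount` members, and the watermark value and helper names are made up.

```javascript
// Send-side throttling by polling bufferedAmount: before sending the next
// frame, wait until the amount of queued-but-unsent data drops below a
// low-water mark. This keeps the sender roughly in step with what TCP
// actually manages to deliver.
const LOW_WATER_MARK = 64 * 1024; // bytes; assumed value, tune to taste

async function sendThrottled(socket, nextFrame) {
  for (;;) {
    const frame = nextFrame(); // pick-the-freshest source; null when done
    if (frame === null) return;
    while (socket.bufferedAmount > LOW_WATER_MARK) {
      // Polling is, as the quote says, inefficient and unergonomic,
      // but it is all the classic WebSocket API offers.
      await new Promise((r) => setTimeout(r, 5));
    }
    socket.send(frame);
  }
}
```

As the comments below discuss, this only protects the sender's side; it does nothing for a receiver that wants to slow down an over-eager peer.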

oberstet
  • 21,353
  • 10
  • 64
  • 97
  • 1
If I understood the linked information correctly, you pause sending on the server when you realize that the client is consuming at a lower rate than the server can produce. I think that with a client in a browser, as per my original post, there is only one way for this situation to arise: the network transports data at a lower rate than the server can produce it. With large bandwidth, a client in a browser seems to have no way to exert backpressure on the server. Do you know a way of exerting client-controlled backpressure? (This is what would happen naturally if the client read synchronously.) – distributed Oct 17 '13 at 08:50
  • and: if you don't consume in your browser JS, and server continues sending, I'd expect TCP backpressure onto server will arise, since neither kernel nor browser WS impl. will buffer incoming data infinitely. not tried myself though .. – oberstet Oct 17 '13 at 09:12
  • 3
`bufferedAmount` works in the opposite direction. In my setup, it would allow the web app to not overwhelm a server. This is nice, and needed, but not what I'm looking for. My issue lies with the "not consuming" part. I know of no way to tell the browser "I'm not currently consuming". The kernel will not buffer indefinitely, which is what produces backpressure; but Chrome, at least, does buffer implicitly. When I stop the JS code in the debugger, Chrome continues reading from the WS, no backpressure builds up, and Chrome's memory usage skyrockets. – distributed Oct 17 '13 at 09:49
  • 2
Yeah, that's an issue. The WS API in browsers would need a way for the app to tune lower/upper watermarks controlling how much buffering the browser WS implementation does for incoming data. If the high-water mark is reached, the browser implementation stops reading from the socket, TCP pressure builds up, the server stops sending, etc. You might want to take this to the IETF HyBi list and/or the WHATWG, or file bugs with browser vendors. Please leave a note here; I'd be interested in this also. It's not an issue with WS per se but with the browser API/implementation. – oberstet Oct 17 '13 at 10:22
  • Thanks for the confirmation. I would very much appreciate something like water marks or flow pauses on the browser side. I'm quite the greenhorn when it comes to The Web, so I'm not sure I'm getting the whole picture here. The IETF releases RFCs, for example RFC 6455 documenting the Websocket protocol. The W3C releases an API specification for browser vendors to follow. I'm not sure yet what is the right instance to ask here, the IETF, W3C or browser vendors. If browser vendors would introduce fixes, there might be different APIs afterwards. Oh my. I'll leave a note here, sure. – distributed Oct 17 '13 at 12:03
  • 1
    First link is now a 404, if any one can comment with the new URL or edit the post to insert a summary. – Nick Breen Sep 18 '19 at 22:14