4

I am using the ws Node.js package to create a simple WebSocket client connection to a server that is sending hundreds of messages per second. Even with a simple onMessage handler that just console.logs incoming messages, the client cannot keep up. My understanding is that this is referred to as backpressure: incoming messages may start piling up in a network buffer on the client side, or the server may throttle the connection or disconnect altogether.

How can I monitor backpressure, or the network buffer, from the client side? I've found several articles discussing this issue from the perspective of the server, but I have no control over the server and need to know just how slow my client is.

oliakaoil
  • 1,615
  • 2
  • 15
  • 36
  • What do you mean by 'client can not keep up'? Does it disconnect? Does it fall behind? Does capturing a performance snapshot (via DevTools) help? – not-a-robot Jun 03 '20 at 12:00
  • see [SO WS backpressure](https://stackoverflow.com/questions/19414277/can-i-have-flow-control-on-my-websockets) and [this ReadMe](https://github.com/baygeldin/ws-streamify) – not-a-robot Jun 03 '20 at 12:13
  • @not-a-robot I mean that the application I'm running on the client machine cannot process messages at the same rate that the client is receiving messages. I believe the messages are being queued by a network layer buffer and eventually dropped. – oliakaoil Jun 03 '20 at 16:28
  • Is it imperative that the client get EVERY message? Or is it possible to let messages be dropped without causing a problem? – James F Jun 03 '20 at 19:20
  • @oliakaoil If the client can not keep up, the JS thread would become very busy and your website would become unresponsive. Also your browsers memory usage would skyrocket because of all the buffering. Did you have a look at the network inspector tab in Chrome's Dev Tools? You can watch live socket traffic there. – not-a-robot Jun 04 '20 at 08:32
  • @not-a-robot I'm not building a website, I'm running Node.js on the command-line with Debian Linux. I'm hoping there is a way to access information about the network buffer or the underlying socket using `ss`, `netstat`, something in `/proc` or some other tool, since there doesn't seem to be anything relevant available via the WebSocket interface. – oliakaoil Jun 04 '20 at 15:16
  • @JamesF unfortunately I cannot ignore/discard messages – oliakaoil Jun 04 '20 at 15:16
  • It seems `netstat` can provide some of this information. The first answer to [this SE question](https://unix.stackexchange.com/questions/428744/how-to-figure-out-the-meaning-behind-recv-q-and-send-q-from-netstat) states that the Receive Queue is data received by the kernel, but not yet accepted by the process. I can therefore use [node-netstat](https://github.com/danielkrainas/node-netstat). – oliakaoil Jun 04 '20 at 18:12

3 Answers

0

So you don't have control over the server and want to know how slow your client is (it sounds like you have already read about backpressure). In that case, I can only think of using a load-testing tool like artillery.

This blog post might help you set up a benchmarking scenario:

https://ma.ttias.be/benchmarking-websocket-server-performance-with-artillery/

MikZuit
  • 684
  • 5
  • 17
  • I'm already dealing with a server that is faster than the client, so there is no need to set up another stress test. I added code to my application which tells me how many messages per second my client is processing, and I can use this to know very roughly when the client is reaching its maximum performance, but this is not very accurate. – oliakaoil Jun 04 '20 at 15:26
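The messages-per-second counting described in the comment above can be sketched as a small helper (the names and interval are hypothetical):

```javascript
// Sketch of a rolling messages-per-second counter.
// Call the returned function once per incoming message.
function makeRateCounter(intervalMs = 1000, onRate = console.log) {
  let count = 0;
  const timer = setInterval(() => {
    onRate(count * (1000 / intervalMs)); // normalize to msgs/sec
    count = 0;
  }, intervalMs);
  timer.unref(); // don't keep the process alive just for the counter
  return () => { count += 1; };
}

// Usage inside the ws client (assumed wiring):
// const tick = makeRateCounter(5000, (rate) => console.log(`${rate} msg/s`));
// ws.on('message', (data) => { tick(); handle(data); });
```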
0

Add timing metrics to your onMessage handler to track how long each message takes to process. You can also use RUM instrumentation from an APM provider (New Relic or AppDynamics for paid options), or the free tier of Google Analytics user timings.

If you can, include a unique identifier for correlation between the client and server for each message sent.

Then you can correlate for a given window how long a message took to send from the server and how long it spent being processed by the client.

You can't get directly to the network socket buffer associated with your websocket traffic since you're inside the browser sandbox. I checked the WebSocket APIs and there are no properties that expose receive buffer information.

Razi
  • 11
  • 3
  • I'm not inside the browser, I'm running Node.js on the command line. I reached the same conclusion as well, there is nothing in the WebSocket interface which provides this information, so I think it must be pulled from the OS somehow (Debian Linux). Also FYI, I do not have control of the server. – oliakaoil Jun 04 '20 at 15:28
0

If you don't have control over the server, your options are limited, but you could try some tricks on the client side to simulate throttling.

This approach assumes you don't mind dropping messages.

One approach would be to open the socket, start receiving events, and buffer them in an in-memory queue/array with a maximum size you choose. Once the queue is full, turn off the socket. After processing enough of the queue, enable the socket again.

Disabling and re-enabling the socket is costly, and you lose events while it is off, but at least your client will not crash.

Once your client is no longer crashing, you can record timestamps and queue sizes to determine the threshold at which the client starts to fail.
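The bounded-queue idea above can be sketched as follows. The `pause`/`resume` hooks are kept abstract on purpose: how you actually pause a ws connection depends on your ws version (pausing the underlying `net.Socket` is one option), so wiring them up is an assumption left to the reader.

```javascript
// Sketch: buffer incoming messages, pause the source when the queue
// is full, and resume once it drains below a low-water mark.
class BoundedQueue {
  constructor({ max, lowWater, pause, resume }) {
    this.items = [];
    this.max = max;         // queue size that triggers pause()
    this.lowWater = lowWater; // queue size that triggers resume()
    this.pause = pause;
    this.resume = resume;
    this.paused = false;
  }

  // Called once per incoming message.
  push(msg) {
    this.items.push(msg);
    if (!this.paused && this.items.length >= this.max) {
      this.paused = true;
      this.pause();
    }
  }

  // Called by the (slower) consumer.
  shift() {
    const msg = this.items.shift();
    if (this.paused && this.items.length <= this.lowWater) {
      this.paused = false;
      this.resume();
    }
    return msg;
  }
}
```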

Geoffrey
  • 31
  • 3
  • Unfortunately I cannot skip messages – oliakaoil Jun 04 '20 at 15:26
  • 1
    Could you build a new server-side component that your client connects to? That server would connect via ws to the destination your client currently connects to, which would let you optimize the handling for your client. – Geoffrey Jun 04 '20 at 17:14
  • I'm not building a website, I'm running Node.js on the command line in Debian Linux. Even with the full resources of the machine available, it can still be too slow. Regardless, my question is still relevant, how can I know if my WS client is keeping up with the WS server? Seems like this would be very helpful in a multitude of contexts. – oliakaoil Jun 04 '20 at 18:06
  • That makes sense. It really is an interesting situation. Personally, I'd dig deep into the WS code and try to recreate some of the underlying functionality such as https://github.com/websockets/ws/blob/c02a4b77c2b257336400c7aed9ca7c222d87c6ff/lib/websocket.js#L173 so that I can sidestep any potential performance issues in the library. A raw connection to the WebSocket should give you more control to at least avoid assumptions in WS lib that could be causing issues for you. Also, at that speed, console.log can be pretty darn slow. I'd suggest writing to an in-mem list and log every 5+ seconds. – Geoffrey Jun 05 '20 at 05:52
  • That may at least give you a way around the "crash/freeze" where you lose control of the process due to overload. – Geoffrey Jun 05 '20 at 05:53
  • I'll take a look, I assumed the ws lib was well-optimized given its popularity, but you never know. Also I found that [disabling the permessage-deflate extension](https://github.com/websockets/ws#websocket-compression) on the client side made a *huge* difference in speeding up the client. – oliakaoil Jun 05 '20 at 16:12
  • great! I agree, I would hope it is optimized, but it does have features you can skip. It may use internal queues and such that you simply don't need. When dealing with 100-400 requests per second, the cost of those optional features can add up. Going super lean would help narrow down whether it's simply a Node limitation or a library issue. Perhaps another language, like Go, could work if you really need to overcome the speed issue. – Geoffrey Jun 05 '20 at 19:45
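For reference, the permessage-deflate change mentioned in the comment above is a client-side constructor option in the ws library. A minimal config fragment (the URL is a placeholder):

```javascript
// Disable the permessage-deflate compression extension on a ws client.
const WebSocket = require('ws');

const ws = new WebSocket('wss://example.com/feed', {
  perMessageDeflate: false,
});
```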