
I am using a PersistentConnection for publishing large amounts of data (many small packages) to the connected clients.

It is basically a one-way flow of data (each client calls endpoints on other servers to set up its various subscriptions, so it will not push any data back to the server over the SignalR connection).

Is there any way to detect that the client cannot keep up with the messages sent to it?

An example would be a mobile client on a poor connection (e.g. in a roaming situation, where the speed may vary a lot). If we are sending 100 messages per second but the client can only handle 10, we will eventually lose messages once the server-side message buffer overflows.
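(For reference, a sketch of the buffer setting I am referring to, assuming SignalR 2.x; the endpoint path and the PublishConnection class used further down are placeholders of mine. Enlarging the buffer only delays the drops for a persistently slow client.)

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Messages kept per connection before the oldest ones are dropped
        // (the SignalR default is 1000).
        GlobalHost.Configuration.DefaultMessageBufferSize = 5000;

        // Placeholder endpoint; PublishConnection is sketched further down.
        app.MapSignalR<PublishConnection>("/publish");
    }
}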

I was looking for a server side event, similar to what has been done on the (SignalR) client, e.g.

protected override Task OnConnectionSlow(IRequest request, string connectionId) {}

but that is not part of the framework (for good reasons, I assume).

I have considered the approach (suggested elsewhere on Stack Overflow) of letting the client tell the server (e.g. every 10-30 seconds) how many messages it has received; if that number differs significantly from the number of messages sent to the client, it is likely that the client cannot keep up.

The event would be used to tell the distributed backend that the client cannot keep up, so that the data generation rate can be turned down.
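Roughly, the sketch I have in mind (assuming SignalR 2.x; PublishConnection, the 10% threshold and NotifyBackendClientIsSlow are placeholders of mine, not SignalR APIs):

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class PublishConnection : PersistentConnection
{
    // Messages pushed to each connection since it connected (shared across instances).
    private static readonly ConcurrentDictionary<string, long> Sent =
        new ConcurrentDictionary<string, long>();

    // Called by the publishing code for every outgoing message.
    public static Task PushAsync(string connectionId, string payload)
    {
        Sent.AddOrUpdate(connectionId, 1, (_, n) => n + 1);
        var context = GlobalHost.ConnectionManager
                                .GetConnectionContext<PublishConnection>();
        return context.Connection.Send(connectionId, payload);
    }

    // The client reports, every 10-30 seconds, how many messages it has handled.
    protected override Task OnReceived(IRequest request, string connectionId, string data)
    {
        long received;
        if (long.TryParse(data, out received))
        {
            long sent;
            Sent.TryGetValue(connectionId, out sent);

            // Placeholder rule: flag the client if it has handled less than 10%
            // of what we have sent it so far.
            if (sent > 0 && received < sent / 10)
            {
                NotifyBackendClientIsSlow(connectionId);
            }
        }
        return base.OnReceived(request, connectionId, data);
    }

    private static void NotifyBackendClientIsSlow(string connectionId)
    {
        // Hypothetical hook: tell the distributed backend to turn down the
        // data generation rate for this client.
    }
}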

cwt237

1 Answer


There's no way to do this right now other than coding something custom. We have discussed this in the past as a potential feature, but it isn't anywhere on the roadmap right now. It's also not clear what "slow" means, as that's up to the application to decide. There'd probably have to be some kind of bandwidth/time/message-based setting that would make this hypothetical event trigger.

If you want to hook in at a really low level, you could use OWIN middleware to replace the client's underlying stream with one that you own, so that you'd see all of the data going over the wire (you'd have to do the same for WebSockets though, and that might be non-trivial).

Once you have that, you could write some time-based logic that determines whether the flush is taking too long, and kill the client that way.
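Very roughly, a sketch of how that stream swap could look for the non-WebSocket transports (the class name, the five-second threshold and the reporting hook are placeholders, not anything SignalR provides; the WebSocket transport bypasses owin.ResponseBody and would need its own handling):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

// OWIN middleware that swaps the client's response stream for one we own,
// so every write to the wire can be timed.
public class SlowClientMiddleware
{
    private readonly AppFunc _next;

    public SlowClientMiddleware(AppFunc next)
    {
        _next = next;
    }

    public Task Invoke(IDictionary<string, object> environment)
    {
        var original = (Stream)environment["owin.ResponseBody"];
        environment["owin.ResponseBody"] =
            new TimedStream(original, TimeSpan.FromSeconds(5));   // placeholder threshold
        return _next(environment);
    }

    // Wraps the real response stream and measures how long writes take.
    private class TimedStream : Stream
    {
        private readonly Stream _inner;
        private readonly TimeSpan _threshold;

        public TimedStream(Stream inner, TimeSpan threshold)
        {
            _inner = inner;
            _threshold = threshold;
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            var sw = Stopwatch.StartNew();
            _inner.Write(buffer, offset, count);
            ReportIfSlow(sw.Elapsed);
        }

        public override async Task WriteAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
        {
            var sw = Stopwatch.StartNew();
            await _inner.WriteAsync(buffer, offset, count, cancellationToken);
            ReportIfSlow(sw.Elapsed);
        }

        private void ReportIfSlow(TimeSpan elapsed)
        {
            if (elapsed > _threshold)
            {
                // Placeholder: record the connection as slow so the application
                // can close it or tell the backend to throttle.
            }
        }

        public override void Flush() { _inner.Flush(); }

        // Pass-through plumbing required by Stream.
        public override bool CanRead { get { return _inner.CanRead; } }
        public override bool CanSeek { get { return _inner.CanSeek; } }
        public override bool CanWrite { get { return _inner.CanWrite; } }
        public override long Length { get { return _inner.Length; } }
        public override long Position
        {
            get { return _inner.Position; }
            set { _inner.Position = value; }
        }
        public override int Read(byte[] buffer, int offset, int count) { return _inner.Read(buffer, offset, count); }
        public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
        public override void SetLength(long value) { _inner.SetLength(value); }
    }
}

It would be registered in the OWIN Startup ahead of the SignalR mapping (e.g. app.Use(typeof(SlowClientMiddleware)); the exact registration depends on the host), and what "kill the client" means there is left open.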

That's very fuzzy but it's basically a brain dump of how a feature like this could work.

davidfowl
  • Thanks for the quick reply - I did not expect you to have a trivial solution at hand, but nice to know that it has been considered and is not planned. I will probably leave it as it is now, and pick it up if we see issues in that direction. – cwt237 Jun 27 '13 at 08:46
  • @davidfowl I have an idea to "throttle" the signalR outgoing functions to Mbits/second. We have 10,000 websockets connected and if I call "Clients.All.func()" that sends 1k to each one, that is 10 megabytes, or a little under 1 second at 100Mbps. I'd like to throttle it to 10Mbps, spreading it out over 10 seconds. Would a Windows QoS GPO work for this? – Brain2000 Aug 17 '18 at 17:55
  • Maybe but that's a hammer. Can you target a specific process that way? – davidfowl Aug 19 '18 at 01:05