I am currently building a horizontally scalable socket.io server that looks like the following:
LoadBalancer (nginx)
Proxy1 Proxy2 Proxy3 Proxy{N}
BackEnd1 BackEnd2 BackEnd3 BackEnd4 BackEnd{N}
The proxies use sticky sessions + cluster, each core running its own socket.io server, and are load balanced by the nginx proxy.
Now to my question: these backend nodes use Redis pub/sub to communicate with the proxies, which handle all client communication over the transport (websockets).
When a proxy forwards a request to a backend server, the backend knows which user made the request and which proxy that user is connected to. My fear is that if a proxy server goes offline for whatever reason, any pending responses on my backend nodes will never reach the user once the proxy comes back online, because the messages were published while it was down. What can I implement to circumvent this, so that messages destined for an offline proxy are queued and then delivered when it comes back online?
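One pattern I am considering is to make the backend push each message onto a durable per-proxy list before (or instead of) publishing, so a proxy that was down can drain its backlog on reconnect. A rough in-memory sketch, with a plain Map of arrays standing in for Redis lists (the `queue:<proxyId>` naming and the `drain` hook are my own assumptions):

```javascript
const queues = new Map(); // stand-in for Redis lists (RPUSH / LPOP)

// Backend side: enqueue instead of fire-and-forget publish.
function enqueue(proxyId, msg) {
  if (!queues.has(proxyId)) queues.set(proxyId, []);
  queues.get(proxyId).push(msg); // real system: RPUSH queue:<proxyId> <msg>
}

// Proxy side: on (re)connect, drain everything queued while it was down.
function drain(proxyId, deliver) {
  const q = queues.get(proxyId) || [];
  while (q.length) deliver(q.shift()); // real system: LPOP / BLPOP in a loop
}

// Simulate: proxy1 is offline while two responses are produced...
enqueue('proxy1', { socketId: 'socket-abc', payload: 'first' });
enqueue('proxy1', { socketId: 'socket-abc', payload: 'second' });

// ...then comes back and drains its backlog in order.
const delivered = [];
drain('proxy1', (msg) => delivered.push(msg.payload));
console.log(delivered); // [ 'first', 'second' ]
```

Is something along these lines the right direction, or is there an established way to get this kind of queue-while-offline behavior with Redis and socket.io?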