
I am currently building a horizontally scalable socket.io server which looks like the following:

                 LoadBalancer (nginx)

      Proxy1      Proxy2      Proxy3      Proxy{N}

 BackEnd1   BackEnd2   BackEnd3   BackEnd4   BackEnd{N}

The proxies use sticky sessions + cluster, each with a socket.io server running on a core, and are load balanced by the nginx proxy.

Now to my question: these backend nodes use Redis pub/sub to communicate with the proxies, which handle all client communication over the transport (WebSockets).

When a proxy sends a request to a backend server, the backend knows which user requested it and which proxy that user is connected to. My fear is that if a proxy server goes offline for whatever reason, any pending responses from my backend nodes will fail to reach the user once the proxy comes back online, because the messages were published while the server was offline. What can I implement to circumvent this issue and essentially have messages queued while any proxy server is offline, then delivered when it's back online?

Steve P
1 Answer


Pub/sub doesn't persist messages. At all. To use Redis for this you would need a queue instead. For example, you can use a combination of list operations: the producer pushes messages onto a list with LPUSH or RPUSH, and your consumer removes them with BLPOP or BRPOP, depending on how you add them and whether you want messages in FIFO or LIFO sequence. Because BLPOP/BRPOP block until an element is available, messages pushed while a consumer is offline simply sit in the list until it reconnects and pops them.
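The FIFO/LIFO distinction can be sketched with a plain array standing in for a Redis list (LPUSH inserts at the head; RPOP and LPOP remove from the tail and head). This is just a model of the list semantics, not a real Redis client:

```javascript
// Model a Redis list with a plain array: index 0 is the head.
const list = [];

// LPUSH: insert at the head of the list.
const lpush = (value) => list.unshift(value);

// RPOP: remove from the tail; LPOP: remove from the head.
const rpop = () => list.pop();
const lpop = () => list.shift();

// Producer pushes three messages while the consumer is "offline";
// they accumulate in the list instead of being lost.
lpush('msg1');
lpush('msg2');
lpush('msg3');

// LPUSH + RPOP drains in FIFO order (oldest first)...
console.log(rpop()); // 'msg1'
// ...while LPUSH + LPOP drains in LIFO order (newest first).
console.log(lpop()); // 'msg3'
```

In production you would replace the array with real Redis calls (e.g. `lPush` on the producer and a blocking `brPop` on the consumer in node-redis) and, in this architecture, keep one list key per proxy so each proxy drains only its own backlog when it comes back online.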

The Real Bill