Maybe I am missing something; this seems too simple. Is it possible to make Redis durable by having a master Redis node replicate data to a slave Redis node?
My situation: I have a REST endpoint which, upon receiving a request from a client, pushes the payload onto a Redis queue and then returns a success (HTTP 200) to the client. If that Redis node goes down before the message is processed and before an fsync occurred, I've lost that payload and no one knows about it.
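Roughly, the handler today looks like this (names are illustrative, and I've swapped the real Redis client for an in-memory stand-in so the sketch is self-contained):

```python
# Illustrative sketch of the current handler. The loss window is between
# lpush() returning and the payload being fsynced/processed: a crash there
# loses the message silently, even though the client already got a 200.

def handle_request(redis_client, payload):
    redis_client.lpush("jobs", payload)   # sits in memory on one node only
    return 200                            # client now believes the payload is safe

# In-memory stand-in so the sketch runs without a Redis server.
class FakeRedis:
    def __init__(self):
        self.queues = {}

    def lpush(self, queue, payload):
        self.queues.setdefault(queue, []).insert(0, payload)

assert handle_request(FakeRedis(), b"payload") == 200
```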
I was wondering if, instead, I could simply write to two Redis queues (in different zones), one on a master and one on a slave. When I write to the master, Redis would automatically write the same element to the slave queue, and only then would the endpoint return an HTTP 200 to the client.
Is this possible? Redis would (i) need a way to replicate the write to a slave and (ii) offer a synchronous or awaitable API that only returns once there is confirmation the payload has been written to both the master and the slave. The key here is that Redis lets the caller know the slave has received the write.
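To make (ii) concrete, here is a sketch of the write path I'm imagining. The `wait(num_replicas, timeout_ms)` call is the hypothetical part: an operation that blocks until that many slaves acknowledge the preceding write. The stand-in client below just simulates that, so the sketch runs without a real Redis deployment:

```python
# Sketch of the synchronous-replication write path I'm after.
# enqueue_durably() only succeeds (and the endpoint only returns 200)
# once the slave has confirmed it holds the payload too.

class ReplicationTimeout(Exception):
    pass

def enqueue_durably(client, queue, payload, min_replicas=1, timeout_ms=500):
    """Push payload, then block until min_replicas acknowledge the write."""
    client.lpush(queue, payload)
    acked = client.wait(min_replicas, timeout_ms)  # hypothetical synchronous ack
    if acked < min_replicas:
        # No 200 goes out, so the caller knows to retry; nothing is silently lost.
        raise ReplicationTimeout(f"only {acked} replica(s) acknowledged")
    return True

# Stand-in client so the sketch runs without a Redis server.
class FakeClient:
    def __init__(self, replicas=1):
        self.replicas = replicas
        self.queues = {}

    def lpush(self, queue, payload):
        self.queues.setdefault(queue, []).insert(0, payload)

    def wait(self, num_replicas, timeout_ms):
        # Pretend `self.replicas` slaves acknowledged the write.
        return min(self.replicas, num_replicas)

assert enqueue_durably(FakeClient(replicas=1), "jobs", b"payload") is True
```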
If the client doesn't get an HTTP 200, they know they should send it again. I feel like there are caveats I'm not seeing.
Thanks