Provided that both the subscribed client and the publishing server retain their connections, is Redis guaranteed to eventually deliver the published message to the subscribed client, even in situations where the client and/or server are under heavy stress? Or should I plan for the possibility that Redis might occasionally drop messages as things get "hot"?
-
There are probably hardware provisos here as well: any queue system is going to get into trouble if you start running out of disk space, for example. – glenatron May 15 '14 at 10:23
-
There is always a limit you can push things to, beyond which any system will start to fail. You have to plan ahead for the amount of load you will receive, and then provision enough resources for it (in this case, high hardware specs, and maybe a cluster of Redis machines to share the load). – Munim May 15 '14 at 12:54
-
I have to add: You can only know this limit and plan for capacity by doing thorough load testing yourself. – Munim May 15 '14 at 12:54
2 Answers
Redis absolutely does not provide guaranteed delivery for publish-and-subscribe traffic. The mechanism is based only on sockets and event loops; there is no queue involved (not even in memory). If a subscriber is not listening while a publication occurs, the event is lost for that subscriber.
It is possible to implement some guaranteed delivery mechanisms on top of Redis, but not with the publish-and-subscribe API. The list data type in Redis can be used as a queue, and as the foundation of more advanced queuing systems, but it does not provide multicast capabilities (so no publish-and-subscribe).
AFAIK, there is no obvious way to easily implement publish-and-subscribe and guaranteed delivery at the same time with Redis.
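The difference between the two mechanisms is easy to see in a small sketch. `MiniRedis` below is a hypothetical in-memory stand-in for just the commands discussed (PUBLISH, LPUSH, RPOP), so the example runs without a server; as in real Redis, `publish` returns the number of subscribers that received the message, and a message published with no listener is simply gone, while a list entry survives until popped:

```python
from collections import defaultdict, deque

class MiniRedis:
    """Hypothetical in-memory stand-in for the two behaviours discussed:
    fire-and-forget PUBLISH vs. a list used as a durable queue."""
    def __init__(self):
        self.lists = defaultdict(deque)
        self.subscribers = defaultdict(list)   # channel -> callbacks

    def publish(self, channel, message):
        # Like Redis PUBLISH: deliver only to currently listening
        # subscribers, and return how many received the message.
        for callback in self.subscribers[channel]:
            callback(message)
        return len(self.subscribers[channel])

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def lpush(self, key, value):
        self.lists[key].appendleft(value)

    def rpop(self, key):
        return self.lists[key].pop() if self.lists[key] else None

r = MiniRedis()

# Pub/Sub: nobody is subscribed yet, so the message is simply gone.
receivers = r.publish("events", "lost-message")
print(receivers)        # 0 -- no subscriber was listening

# List-as-queue: the item waits in the list until a consumer pops it.
r.lpush("jobs", "job-1")
print(r.rpop("jobs"))   # job-1
```

The same asymmetry holds against a real server: PUBLISH reports how many connected subscribers received the message, and there is no way to replay it afterwards.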

-
Hello Didier, one pattern is to use both mechanisms together. Publishers push items into a list. Workers take items from the queue (the list), which is persistent (depending on the persistence configuration), and PUBLISH each item, removing it only if enough acknowledgements were received (ACKs can be received with Pub/Sub itself); otherwise the item is re-queued for later delivery. Usually you handle the item with RPOPLPUSH / BRPOPLPUSH so as to move the item being processed into a temporary queue. – antirez May 15 '14 at 15:12
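The pattern antirez describes above can be sketched as follows. `MiniQueues` is a hypothetical in-memory stand-in for the three list commands the pattern relies on (LPUSH, RPOPLPUSH, LREM), and `enough_acks` is a placeholder for collecting acknowledgements over Pub/Sub:

```python
from collections import deque

class MiniQueues:
    """Hypothetical in-memory stand-in for LPUSH, RPOPLPUSH and LREM."""
    def __init__(self):
        self.q = {}

    def lpush(self, key, value):
        self.q.setdefault(key, deque()).appendleft(value)

    def rpoplpush(self, src, dst):
        # Atomically (in real Redis) move the oldest item of src to dst.
        if not self.q.get(src):
            return None
        value = self.q[src].pop()
        self.q.setdefault(dst, deque()).appendleft(value)
        return value

    def lrem(self, key, value):
        try:
            self.q[key].remove(value)
        except (KeyError, ValueError):
            pass

r = MiniQueues()
r.lpush("queue", "msg-1")

# Worker moves the item into a per-worker processing list, so a crash
# between pop and acknowledgement cannot silently lose it.
item = r.rpoplpush("queue", "queue:processing")

def enough_acks(item):
    # Placeholder: in the real pattern, ACKs arrive via Pub/Sub.
    return True

if enough_acks(item):
    r.lrem("queue:processing", item)   # delivered: drop the safety copy
else:
    r.lpush("queue", item)             # not delivered: re-queue for later
```

The processing list is what makes the scheme reliable: an item is only ever removed after delivery is confirmed, and a recovery job can push stale entries from `queue:processing` back onto `queue`.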
-
I understand that it's impossible to offer a 100% guarantee, but assuming a scenario where all other conditions are perfect (the subscriber is listening, the publisher publishes the message correctly, Redis is up and running just fine, the machine is not resource-constrained, etc.), and isolating the possible causes of message drops to the Redis software itself: is there a possibility that Redis itself, under high stress (e.g. thousands of publishes/second), may occasionally drop messages that should otherwise have gone through? That's basically what I'm looking to understand. /cc @antirez – Mahn May 16 '14 at 14:08
-
Assuming no TCP connection is lost, I would say data will not be silently discarded. If you publish too much traffic to Redis, and/or this traffic has to be sent to too many subscribers, it will slow down the Redis event loop. The consequence will be an accumulation of pending data in the input socket buffers. When they are full, your publishing clients will be slowed down (due to TCP flow control). I do not anticipate real data loss in that case. – Didier Spezia May 16 '14 at 15:58
-
@Didier Spezia, I would like to further clarify your point that "when they are full, your publishing clients will be slowed down (due to TCP flow control). I do not anticipate real data loss in that case." I thought that Redis publish/subscribe used a fire-and-forget mechanism. Please correct my possible misunderstanding. Thank you. – Frank Feb 01 '18 at 08:40
-
Fire and forget does not mean it can bypass TCP flow control. When the socket buffer is full, write operations on the corresponding file descriptors will block. – Didier Spezia Feb 24 '18 at 09:49
-
@Didier Spezia, the UDP vs. TCP difference is tiny. UDP is not worth the overhead of the extra code to detect missing data and re-transmit, so TCP is best. The TCP throughput equation indicates that a single TCP connection cannot achieve the transfer rate we want; we need 2 parallel TCP connections, each transferring different data from the other, to increase the transfer rate. A Redis Pub/Sub queue will not work because when you attach 3 subscriber connections to the queue, they all get the same messages rather than different ones. Can we replace the Pub/Sub queue with an alternative approach? – Frank Mar 05 '18 at 07:31
Redis does not provide guaranteed delivery using its Pub/Sub mechanism. Moreover, if a subscriber is not actively listening on a channel, it will not receive messages that would have been published.
I previously wrote a detailed article that describes how one can use Redis lists in combination with BLPOP to implement reliable multicast pub/sub delivery: http://blog.radiant3.ca/2013/01/03/reliable-delivery-message-queues-with-redis/
For the record, here's the high-level strategy:
- When each consumer starts up and gets ready to consume messages, it registers by adding itself to a Set representing all consumers registered on a queue.
- When a producer publishes a message on a queue, it:
- Saves the content of the message in a Redis key
- Iterates over the set of consumers registered on the queue, and pushes the message ID in a List for each of the registered consumers
- Each consumer continuously looks out for a new entry in its consumer-specific list and, when one comes in, removes the entry (using a BLPOP operation), handles the message and moves on to the next message.
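The steps above can be sketched as follows, again with a hypothetical in-memory stand-in for the Redis commands used (SADD, SMEMBERS, SET, GET, LPUSH, plus a non-blocking `rpop` standing in for BLPOP); all key names here are illustrative, not the ones the article or RedisQ actually use:

```python
from collections import deque
import itertools

class MiniRedis:
    """Hypothetical in-memory stand-in for the commands the pattern uses."""
    def __init__(self):
        self.kv, self.sets, self.lists = {}, {}, {}
    def sadd(self, key, member): self.sets.setdefault(key, set()).add(member)
    def smembers(self, key): return self.sets.get(key, set())
    def set(self, key, value): self.kv[key] = value
    def get(self, key): return self.kv.get(key)
    def lpush(self, key, value): self.lists.setdefault(key, deque()).appendleft(value)
    def rpop(self, key):  # non-blocking stand-in for BLPOP
        q = self.lists.get(key)
        return q.pop() if q else None

r = MiniRedis()
ids = itertools.count(1)

# Step 1: each consumer registers itself in a Set on start-up.
r.sadd("queue:consumers", "consumer-a")
r.sadd("queue:consumers", "consumer-b")

def publish(payload):
    # Step 2: store the message body once under its own key, then fan the
    # message ID out to the list of every registered consumer (multicast).
    msg_id = str(next(ids))
    r.set(f"queue:msg:{msg_id}", payload)
    for consumer in r.smembers("queue:consumers"):
        r.lpush(f"queue:{consumer}", msg_id)

publish("hello")

# Step 3: each consumer pops from its own list and looks up the body,
# so every registered consumer sees every message.
for consumer in ("consumer-a", "consumer-b"):
    msg_id = r.rpop(f"queue:{consumer}")
    print(consumer, r.get(f"queue:msg:{msg_id}"))
```

Because the body is stored once and only IDs are duplicated, the per-consumer lists stay small; against a real server, the consumer-side `rpop` would be a BLPOP so idle consumers block instead of polling.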
I have also made a Java implementation of these principles available open-source: https://github.com/davidmarquis/redisq
These principles have been used to process about 1,000 messages per second from a single Redis instance, with two instances of the consumer application each consuming messages with 5 threads.

-
What happens in terms of Redis memory consumption when the publisher continues publishing messages, but one consumer stops consuming them for some reason? Does Redis continue to accumulate messages? – odiszapc Mar 18 '17 at 13:17
-
Yep, messages are kept in memory in Redis, and it acts as a buffer until at least one consumer comes back "online" and starts consuming messages. – David M. Mar 19 '17 at 13:41
-
Suppose there are two consumers for one producer, and one consumer is running very slowly compared to the other. Then the list for the slow consumer will keep growing, and at some point Redis will either drop elements from the slow consumer's list or run out of memory. Is this something we should be worrying about? We are exploring Redis and need answers like this to understand its corner cases. – Saloni Vithalani Dec 15 '17 at 11:07
-
@SaloniVithalani these are valid concerns... Redis is not a message queue per se. It can be used as a queue for simple needs, but if these concerns apply in your environment, you might be better off with an actual message queue (e.g. RabbitMQ, ZeroMQ, etc.). – David M. Dec 16 '17 at 15:43
-
@David M., other than switching to RabbitMQ or ZeroMQ, could we substantially reduce lost Redis messages under heavy load with batching? Thank you. – Frank Jan 24 '18 at 11:24
-
@Frank in my experience a queue backed by Redis and implemented with RedisQ (see above) has seen sustained high load (heavy is relative I guess) without losing any messages. Not sure if batching would help your case, but anything is worth trying! – David M. Jan 25 '18 at 12:23
-
@David M., Thank you for your reply. Please describe how to implement a queue backed by Redis. Why does backing a queue with Redis prevent losing any messages? – Frank Jan 25 '18 at 14:23
-
@David M., Your article states that "The solution also does not guarantee that messages will be consumed in the order they were produced. If you have a single consumer instance you’re covered, but as soon as you have multiple consumer instances you cannot guarantee the ordering of messages. Maintaining a lock in a specific key for each consumer would enable this, at the cost of scalability (only 1 message can be consumed at any time throughout your consumer instances)." . As of today, have you solved this problem yet? Thanks. – Frank Jan 25 '18 at 14:37
-
@Frank haven't solved this because it hasn't been a problem for my use cases. At any point, if you do have complex requirements with regards to your message queue, you really should consider using an actual MQ rather than Redis. Redis provides an easy way to get started, but it has its drawbacks. – David M. Jan 26 '18 at 15:24
-
@David M., there is a new general-purpose data structure in Redis called Streams, described in http://antirez.com/news/114. Could Redis Streams be applied to the problem you described in your very nice January 2013 article, namely that guaranteeing messages are consumed in the order they were produced under sustained high load requires maintaining a lock or mutex in a specific key for each consumer, at the cost of scalability, with messages dropped if Redis is overwhelmed? If so, how would you recommend applying Redis Streams to this use case? – Frank Jan 27 '18 at 19:08
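For what it's worth, the Stream behaviours relevant to this question can be sketched with a hypothetical in-memory stand-in for XADD and XREADGROUP: entries get monotonically increasing IDs, and a consumer group hands out each entry exactly once, in insertion order. Note that once several consumers in the same group process entries in parallel, global ordering across those consumers is still not guaranteed; only the assignment order is.

```python
class MiniStream:
    """Hypothetical in-memory sketch of two Redis Stream behaviours:
    append-only entries with increasing IDs (XADD), and a single consumer
    group that delivers each entry exactly once, in order (XREADGROUP)."""
    def __init__(self):
        self.entries = []          # (entry_id, payload), append-only
        self.group_cursor = 0      # last-delivered position for the group

    def xadd(self, payload):
        entry_id = f"{len(self.entries) + 1}-0"
        self.entries.append((entry_id, payload))
        return entry_id

    def xreadgroup(self, count):
        # Each entry is handed to exactly one consumer of the group,
        # in insertion order; it is never handed out twice.
        batch = self.entries[self.group_cursor:self.group_cursor + count]
        self.group_cursor += len(batch)
        return batch

s = MiniStream()
for n in range(3):
    s.xadd(f"packet-{n}")

print(s.xreadgroup(2))  # the first two entries, in insertion order
print(s.xreadgroup(2))  # only the one remaining entry
```

Real Streams add what this sketch omits: per-consumer pending-entry lists, XACK for acknowledgement, and XAUTOCLAIM-style reclaiming of entries from dead consumers.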
-
@David M., what if we remove the Redis publish and subscribe connections between the TCP packet producer and the 3 consumers, and instead persist the 4 million TCP packets per second to disk or a RAM cache? Thank you. – Frank Jan 29 '18 at 05:10