What are the Redis capacity requirements to support 50k consumers within one consumer group, consuming and processing messages in parallel? I'm looking to test an infrastructure for this scenario and need to understand the considerations.
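For context, the consumption pattern I have in mind looks roughly like this; the stream, group, and consumer names are placeholders, and the entry ID is only illustrative:

```bash
# Create the consumer group once (MKSTREAM creates the stream if it doesn't exist yet).
redis-cli XGROUP CREATE mystream mygroup '$' MKSTREAM

# Each container uses a unique consumer name (consumer-1 here) and reads new entries,
# so pending entries are distributed across all consumers in the group:
redis-cli XREADGROUP GROUP mygroup consumer-1 COUNT 10 BLOCK 5000 STREAMS mystream '>'

# After processing, the consumer acknowledges the entry by its ID (as returned by XREADGROUP):
redis-cli XACK mystream mygroup 1526569495631-0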
-
50K consumers - you mean 50K servers sharing & consuming the events? – RamPrakash Apr 16 '20 at 14:40
-
In our case, there will be 50k docker containers consuming the messages in parallel from Redis stream. – Sowmyan Soman Apr 16 '20 at 16:46
-
man.. that sounds extraordinary! what is the expected event throughput? – RamPrakash Apr 16 '20 at 17:26
-
Yes, any idea on the capacity requirements from a Redis perspective? I couldn't find any recommendation to support such a large-scale scenario. – Sowmyan Soman Apr 16 '20 at 17:28
-
I am really sorry I have no experience in that scale. I just wanted to know if it was a typo from your side. – RamPrakash Apr 16 '20 at 17:31
-
Np @RamPrakash. Thanks for your comments. – Sowmyan Soman Apr 16 '20 at 17:32
1 Answer
Disclaimer: I worked at a company which used Redis at a somewhat large scale (probably fewer consumers than in your case, but our consumers were very active). However, I wasn't on the infrastructure team, although I was involved in some DevOps tasks.
I don't think you will find an exact number, so I'll try to share some tips and tricks to help you:
Be sure to read the entire Redis Admin page. There's a lot of useful information there. I'll highlight some of the tips from there:
- Assuming you'll set up a Linux host, edit /etc/sysctl.conf and set a high net.core.somaxconn (RabbitMQ suggests 4096). Check the documentation of the tcp-backlog config in redis.conf for an explanation about this.
- Assuming you'll set up a Linux host, edit /etc/sysctl.conf and set vm.overcommit_memory = 1. Read below for a detailed explanation.
- Assuming you'll set up a Linux host, edit /etc/sysctl.conf and set fs.file-max. This is very important for your use case: the open file handles / file descriptors limit is essentially the maximum number of file descriptors the OS can handle, and each connected client uses one file descriptor. Please check the Redis documentation on this; the RabbitMQ documentation also presents some useful information about it.
- If you edit the /etc/sysctl.conf file, run sysctl -p to reload it.
- "Make sure to disable Linux kernel feature transparent huge pages, it will affect greatly both memory usage and latency in a negative way. This is accomplished with the following command: echo never > /sys/kernel/mm/transparent_hugepage/enabled." Add this command also to /etc/rc.local to make it permanent across reboots. (A consolidated sketch of these settings follows this list.)
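Putting the kernel-level items above together, here is a minimal sketch of how they could be applied on a Linux host. The exact numbers (apart from overcommit_memory) are only examples to tune for your own load, and the file-descriptor values are assumptions for a tens-of-thousands-of-clients scenario:

```bash
# Raise the listen backlog (pair this with the tcp-backlog setting in redis.conf).
sudo sysctl -w net.core.somaxconn=4096

# Allow Redis to fork for background saves / AOF rewrites even when memory looks tight.
sudo sysctl -w vm.overcommit_memory=1

# Raise the system-wide file-descriptor ceiling; each connected client consumes one descriptor.
sudo sysctl -w fs.file-max=100000

# Persist the settings by adding the same keys to /etc/sysctl.conf, then reload with:
sudo sysctl -p

# Disable transparent huge pages (add this line to /etc/rc.local to survive reboots).
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

# Also raise the per-process open-files limit for the user running redis-server,
# and make sure maxclients in redis.conf is at least as high as your expected client count.
ulimit -n 100000
```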
In my experience Redis is not very resource-hungry, so I believe you won't have issues with CPU. Memory usage is directly related to how much data you intend to store in it.
- If you set up a server with many cores, consider using more than one Redis Server. Redis is (mostly) single-threaded and will not use all your CPU resources if you use a single instance in a multicore environment.
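As a rough illustration of the multi-instance idea, you could start one redis-server per core on different ports and spread your streams/consumer groups across them (the ports, paths, and per-core split below are assumptions, not a recommendation for your exact workload):

```bash
# One instance per core, each with its own port and data directory
# (command-line options override the values from the shared config file).
redis-server /etc/redis/redis.conf --port 6379 --dir /var/lib/redis/6379 --daemonize yes
redis-server /etc/redis/redis.conf --port 6380 --dir /var/lib/redis/6380 --daemonize yes
redis-server /etc/redis/redis.conf --port 6381 --dir /var/lib/redis/6381 --daemonize yes

# Quick health check on each instance:
for port in 6379 6380 6381; do redis-cli -p $port ping; done
```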
The Redis server also warns about wrong/risky configurations in its log output on startup.
Explanation of Overcommit Memory (vm.overcommit_memory)
"Setting overcommit_memory to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis." [from the Redis FAQ]
There are three possible settings for vm.overcommit_memory:
- 0 (zero): Heuristic overcommit, the default. The kernel checks whether enough memory appears to be available and, if so, allows the allocation; obviously excessive requests are denied and an error is returned to the application.
- 1 (one): Always overcommit; the kernel's equivalent of "all bets are off". It returns success to every allocation request, regardless of how much physical RAM and swap are actually available, and deals with any shortage later. This is the setting the Redis FAQ recommends, so that the fork() used for background saves does not fail just because the dataset is large.
- 2 (two): Never overcommit. The total committed address space is limited to swap plus a percentage of physical RAM defined by vm.overcommit_ratio (default 50). For instance, a vm.overcommit_ratio of 50 and 1 GB of RAM would let the kernel commit up to 0.5 GB plus swap before further requests fail.
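If it helps, this is how you could check and apply the recommended value on a running host (a quick sketch; persisting it via /etc/sysctl.conf is covered in the list above):

```bash
# Show the current mode (0, 1 or 2):
cat /proc/sys/vm/overcommit_memory

# Switch to mode 1 at runtime, as the Redis FAQ suggests:
sudo sysctl -w vm.overcommit_memory=1

# For reference, /proc/meminfo shows CommitLimit (the ceiling enforced under mode 2)
# and Committed_AS (how much address space is currently committed):
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```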