In our project, we are using a single Redis instance (hosted on GCP) with 4 GB of total memory, of which only about 2 GB is currently used. The connection limit is 1000. A few days ago, we noticed an unexpected error (lasting a few minutes) while reading from the Redis cache: "dial tcp xx.xx.xx.xx:6379: socket: too many open files"
I checked and found no surge in CPU utilisation or memory usage on the Redis instance, and the instance itself never went down. After a few minutes the error disappeared on its own. It seems like the error refers to too many sockets (file descriptors) being open at the same time on the client process. I then looked for the default connection pool size (if any) and found this in the official docs of the go-redis library (which we're using):
To improve performance, go-redis automatically manages a pool of network connections (sockets). By default, the pool size is 10 connections per every available CPU as reported by runtime.GOMAXPROCS. In most cases, that is more than enough and tweaking it rarely helps.
So I'm unable to understand what's causing this issue, and how to fix it if it arises again in the future. Can someone please help?
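In case it's relevant, this is the kind of explicit pool configuration I'm considering instead of relying on the default. A minimal sketch, assuming go-redis v9 (import path github.com/redis/go-redis/v9; v8 has similar but not identical field names), with illustrative values rather than recommendations:

```go
import (
	"time"

	"github.com/redis/go-redis/v9"
)

// Cap the pool explicitly instead of relying on the
// 10 * GOMAXPROCS default, so the process cannot open
// more sockets than the OS file-descriptor limit allows.
rdb := redis.NewClient(&redis.Options{
	Addr:         "xx.xx.xx.xx:6379",
	PoolSize:     50,              // hard cap on open sockets to Redis (illustrative value)
	MinIdleConns: 5,               // keep a few warm connections ready
	PoolTimeout:  4 * time.Second, // wait for a free connection instead of failing immediately
})
```

Would capping the pool like this be the right direction, or should I instead be raising the file-descriptor limit (`ulimit -n`) on the client hosts, or looking for a connection leak (e.g. multiple clients being created instead of one shared client)?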