
In our project, we are using a single Redis instance (hosted on GCP) with 4 GB of total memory, of which only about 2 GB is currently used. The connection limit is 1000. A few days ago, we noticed an unexpected error (for a few minutes) while reading from the Redis cache: "dial tcp xx.xx.xx.xx:6379: socket: too many open files"

Now, I checked that there was no surge in either the CPU utilisation or the memory usage of Redis, nor did the Redis instance go down. After a few minutes, the error disappeared on its own. It seems that this error refers to a high number of connections being open at the same time. I also checked for a default connection pool size (if any) and found this in the official docs of the go-redis library (which we're using):

To improve performance, go-redis automatically manages a pool of network connections (sockets). By default, the pool size is 10 connections per every available CPU as reported by runtime.GOMAXPROCS. In most cases, that is more than enough and tweaking it rarely helps.
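If the default ever turns out to be wrong for your workload, the pool can be sized explicitly. A minimal configuration sketch, assuming go-redis v9; the address and pool size below are placeholders, not values from the question:

```go
package main

import (
	"context"

	"github.com/redis/go-redis/v9"
)

func main() {
	// PoolSize overrides the default of 10 connections per
	// GOMAXPROCS CPU described in the docs quoted above.
	rdb := redis.NewClient(&redis.Options{
		Addr:     "xx.xx.xx.xx:6379", // placeholder address
		PoolSize: 50,                 // hard cap on pooled connections
	})
	defer rdb.Close() // releases every pooled connection

	_ = rdb.Ping(context.Background())
}
```

Note that the pool only caps connections held by this one client; it does nothing if new clients are created repeatedly.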

So, I'm unable to understand what's causing this issue and how to fix it if it arises again in the future. Can someone please help?

KhiladiBhaiyya
  • You need to provide some code so we can understand what's going on. You most likely have bugs in the management of your connections to Redis. Try to extract a minimal working example that connects to Redis and exhibits the same problem, so that we can pinpoint where the bug is. – davidriod Jul 09 '22 at 14:00

1 Answer


This is not an issue with Redis; it is likely an issue in your code.

Processes in Linux have limits imposed on them; one such limit is the number of open file descriptors a process can have at one time.

A process creates a file descriptor to access a resource and perform operations against it, such as reading from or writing to it. A file descriptor does not represent only what you think of as a traditional 'file' on disk; it is also used to represent network sockets that a program may read from or write to.

In your case, you see: "dial tcp xx.xx.xx.xx:6379: socket: too many open files"

Your program was attempting to open a new network connection to Redis. To do so, it must create a socket, which requires a file descriptor. The error you got back, "too many open files", means this limit was hit.

You can do two things:

  1. Raise this limit; read about ulimit at https://ss64.com/bash/ulimit.html or search for your error message, which has many results.
  2. Investigate why you had too many open files.

The 2nd piece is likely to show that you are opening files or sockets and not closing them, causing you to 'leak' descriptors. For example, if each time you query Redis you open a new connection that is never closed, you will eventually run out of file descriptors.

sbrichards