
We have inherited a system with a central server for queuing operations. Redis was chosen as the queuing agent.

Once in a while (every 2-3 days) the Redis service's CPU usage climbs above 100%.

I tried to read the log to find the cause:

tail /var/log/redis/redis-server.log

But it returns an empty result.
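One thing worth checking here: an empty file may just mean that logging is routed somewhere else. A sketch of checking where Redis is actually configured to log (the config path varies by distro; `/etc/redis/redis.conf` is the Debian/Ubuntu location, other systems use `/etc/redis.conf`, and on systemd distros output may land in the journal):

```shell
# See which log destination the config actually points to:
grep -E '^[[:space:]]*(logfile|loglevel|syslog-enabled)' /etc/redis/redis.conf 2>/dev/null || true

# On systemd distros, redis output may be in the journal instead:
journalctl -u redis-server --since "1 hour ago" 2>/dev/null | tail -n 50
```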

I found this article that proposes some diagnostic steps, but Redis does not respond to my commands.

I'm stuck at this point and don't know how to find the problem. Also, is there a way to limit the amount of CPU that Redis uses?

Saeed Neamati
  • Are you using a server with multiple CPUs? Because I think you are using too much memory rather than CPU. – c4f4t0r Apr 05 '16 at 13:16

2 Answers

I was also having problems with Redis CPU usage. In my case the cause was a large number of Redis connections (clients) combined with the Linux default ulimit of 1024 open files.

High CPU usage can also be caused by CPU-intensive operations such as SMEMBERS on large sets, which is O(N) and blocks the single-threaded event loop while the whole reply is built.
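For illustration, a sketch of replacing one blocking SMEMBERS call with incremental SSCAN batches. Here `jobs:pending` is a hypothetical key name, and the parsing assumes redis-cli's raw output when piped, where the first line of each reply is the next cursor and the remaining lines are members:

```shell
# Drain a large set in small batches instead of one O(N) SMEMBERS call.
if command -v redis-cli >/dev/null; then
    cursor=0
    while :; do
        reply=$(redis-cli SSCAN jobs:pending "$cursor" COUNT 100)
        cursor=$(printf '%s\n' "$reply" | head -n 1)   # first line: next cursor
        printf '%s\n' "$reply" | tail -n +2            # this batch of members
        # Stop on cursor 0 (scan complete) or an empty reply (server unreachable):
        case "$cursor" in ""|0) break ;; esac
    done
fi
```

Each SSCAN call returns quickly, so other clients are not starved while a huge set is iterated.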

You can also use the SLOWLOG Redis command to log long-running commands.
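The SLOWLOG threshold can be set at runtime; a minimal sketch (the 10 ms threshold and 128-entry history are arbitrary choices, and the commands are skipped when redis-cli is not installed):

```shell
if command -v redis-cli >/dev/null; then
    # Record every command slower than 10 ms (the value is in microseconds):
    redis-cli CONFIG SET slowlog-log-slower-than 10000
    # Keep the last 128 slow entries:
    redis-cli CONFIG SET slowlog-max-len 128
    # Later, inspect the slowest commands captured so far:
    redis-cli SLOWLOG GET 10
    # Clear the log after reviewing:
    redis-cli SLOWLOG RESET
fi
```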

When the log is empty and SLOWLOG shows nothing, the best way to trace the problem is to strace the Redis process and see what is going on. In my case it was a 'Too many open files' error, which was only visible in the strace output and was solved by raising the default open-files limit above 1024.
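A sketch of that strace approach (assumes strace and pidof are available; the /proc check shows the limit the running process actually has, which can differ from your shell's):

```shell
pid=$(pidof redis-server || true)
if [ -n "$pid" ]; then
    # Watch the syscalls Redis is making (Ctrl-C to stop; -f follows threads).
    # A flood of EMFILE "Too many open files" errors would show up here.
    strace -f -p "$pid" 2>&1 | head -n 50
    # The open-files limit the running process actually has:
    grep 'open files' "/proc/$pid/limits"
fi
# Your current shell's limit, for comparison:
ulimit -n
```

If the open-files limit turns out to be the culprit, raise it with `LimitNOFILE=` in the systemd unit (or in /etc/security/limits.conf on non-systemd setups) and restart Redis.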

By default Redis has no resource-usage limits, so everything depends on the power of the server. I would recommend configuring Redis with limits:

1- Edit this file: /etc/redis.conf

2- Add a memory limit (in bytes): maxmemory 536870912

(512 MB in my case, but use whatever RAM limit you want/need)

3- Also add this policy line: maxmemory-policy volatile-lru

With this policy enabled, when maxmemory is reached Redis evicts the least-recently-used keys that have an expire set to make room for new ones. (Keys without a TTL are never evicted under volatile-lru; use allkeys-lru if every key should be evictable.)

If you want more info about this, everything is well commented in /etc/redis.conf.
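For reference, the byte arithmetic behind the value above, plus a sketch of applying the same settings to a running server with CONFIG SET (runtime changes are lost on restart unless they are also written to redis.conf):

```shell
# 512 MB expressed in bytes -- the maxmemory value used above:
bytes=$((512 * 1024 * 1024))
echo "$bytes"    # 536870912

# Apply to a live server without a restart (skipped if redis-cli is absent):
if command -v redis-cli >/dev/null; then
    redis-cli CONFIG SET maxmemory "$bytes"
    redis-cli CONFIG SET maxmemory-policy volatile-lru
    redis-cli CONFIG GET maxmemory    # verify the new limit
fi
```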

neogeo