
I am using AWS ElastiCache for Redis on a project and ran into an Out of Memory (OOM) issue. While investigating, I found a couple of parameters that affect the amount of usable memory, but the math doesn't work out for my case. Am I missing any variables?

I'm using:

  • 3 shards, 3 nodes per shard
  • cache.t2.micro instance type
  • default.redis4.0.cluster.on cache parameter group

The ElastiCache website says cache.t2.micro has 0.555 GiB = 0.555 * 2^30 B = 595,926,712 B memory.

default.redis4.0.cluster.on parameter group has maxmemory = 581,959,680 (just under the instance memory) and reserved-memory-percent = 25%. 581,959,680 B * 0.75 = 436,469,760 B available.
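The arithmetic above can be sketched in Python (all values taken from the question; the variable names mirror the ElastiCache parameter names):

```python
# Advertised instance memory for cache.t2.micro: 0.555 GiB
instance_memory = int(0.555 * 2**30)   # 595,926,712 bytes

# Values from the default.redis4.0.cluster.on parameter group
maxmemory = 581_959_680                # bytes, just under instance memory
reserved_memory_percent = 25           # percent held back for overhead

# Expected usable memory per node
usable = maxmemory * (100 - reserved_memory_percent) // 100
print(usable)  # 436469760
```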

Now, looking at the BytesUsedForCache metric in CloudWatch when I ran out of memory, I see nodes around 457M, 437M, 397M, 393M bytes. It shouldn't be possible for a node to be above the 436M bytes calculated above!
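Comparing the CloudWatch readings against that cap makes the discrepancy explicit (the per-node values below are the approximate figures from the metric):

```python
usable_cap = 436_469_760  # maxmemory * 0.75, computed above

# Approximate BytesUsedForCache per node at the time of the OOM
bytes_used = [457_000_000, 437_000_000, 397_000_000, 393_000_000]

# Nodes reporting more than the computed cap
over_cap = [b for b in bytes_used if b > usable_cap]
print(over_cap)  # [457000000, 437000000]
```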

What am I missing? Is there something else that determines how much memory is usable?

Matthew Woo
  • Out of interest, what happens when it runs out of memory? Can you just not add more to the cache, or is the failure more spectacular than that? – matt freake Nov 21 '19 at 13:58
    @matt freake I believe the specific error that I received was "lpush failed." I can't remember the exact wording, but it wasn't obvious that I was OOM. I'm guessing that it would have worked to add to a different key that wasn't hashed to the particular node that was OOM, but I can't verify. – Matthew Woo Nov 22 '19 at 18:31

1 Answer


I remember reading it somewhere but cannot find it right now: I believe BytesUsedForCache is the sum of the RAM and swap used by Redis to store data and buffers.

ElastiCache's docs suggest that swap usage should not go higher than 300 MB, so I would check the SwapUsage metric for that time period.
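If that is right, the apparent overshoot is consistent with the RAM cap. A minimal sketch of the idea, assuming BytesUsedForCache = RAM + swap (the split below is implied arithmetic, not a measured value):

```python
usable_ram_cap = 436_469_760        # maxmemory minus 25% reserved-memory-percent
bytes_used_for_cache = 457_000_000  # highest node reading from CloudWatch

# If the metric includes swap, the implied swap usage on that node would be:
implied_swap = bytes_used_for_cache - usable_ram_cap
print(implied_swap)  # 20530240 bytes, ~20.5 MB
```

That would put the node's swap well under the 300 MB guideline, so the metric alone would not have flagged a problem.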

tedd_selene