
I am currently using a Redis cluster with two node groups and one replica per node. I chose Redis for its high performance, and I now have a new requirement: persistent storage of the data in Redis. I want to keep the low latency Redis gives me while still running some procedure that saves the data in the background. The built-in backup snapshots are no longer good enough, since there is a maximum of 20 backups per 24 hours; I need the data synced approximately every minute. The data must be stored in such a way that a restart of the system does not lose it, and that it can be restored at any time.

So if I summarize the requirements:

  1. Keep working with Redis ElastiCache
  2. Keep the highest performance and lowest latency
  3. Have the data persist (including when the system is down or restarted)
  4. Sync the data at intervals of roughly a minute
  5. Be able to restore the data back to Redis after it is lost

While googling I found suggestions to manually run BGSAVE from a sidecar Docker container on EC2, or to run a replica on another EC2 machine, and then have a Lambda take the RDB file/data and save it in S3. Will this fit my needs?
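For what it's worth, the BGSAVE-plus-S3 idea can be sketched roughly like this, assuming a self-managed replica on EC2 (ElastiCache itself gives no filesystem access). The host name, bucket name, and RDB path below are placeholders, and `redis`/`boto3` are assumed to be installed:

```python
# Sketch of the sidecar described above: trigger BGSAVE on a self-managed
# replica, then ship the resulting RDB file to S3 roughly once a minute.
# BUCKET, RDB_PATH, and the replica host are hypothetical placeholders.
import datetime

BUCKET = "my-redis-backups"   # hypothetical S3 bucket
RDB_PATH = "/data/dump.rdb"   # default RDB location on the replica

def s3_key(now: datetime.datetime) -> str:
    """Build a timestamped S3 key so each minute's snapshot is kept."""
    return now.strftime("redis/%Y/%m/%d/dump-%H%M.rdb")

def backup_once() -> None:
    import redis, boto3                       # third-party deps, assumed installed
    r = redis.Redis(host="replica.internal")  # the EC2 replica, not ElastiCache
    r.bgsave()                                # fork-based snapshot, non-blocking
    # In practice, poll r.lastsave() until it advances before uploading,
    # so you don't ship a half-written RDB file.
    boto3.client("s3").upload_file(
        RDB_PATH, BUCKET, s3_key(datetime.datetime.utcnow())
    )

print(s3_key(datetime.datetime(2024, 1, 2, 3, 4)))
```

Restoring would then mean pulling the latest RDB from S3 and re-seeding Redis from it, which is slower than AOF replay but fits the once-a-minute requirement.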

What do the experts suggest? What are your ideas?

devdev

1 Answer


You can get close to your requirements by enabling AOF persistence. This is done in the cluster's parameter group:

appendonly yes
appendfsync always|everysec

You will also have to restart the cluster for the change to take effect. As you can see, Redis has only two options for file system sync: after every write (`always`) or once per second (`everysec`). Syncing on every write is quite slow, so go with `everysec` if you want to keep good performance.
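As a sketch of how the parameter change could be applied with boto3 (the parameter group name is a placeholder; the cluster must be using a custom, non-default parameter group):

```python
# Sketch: set the AOF parameters on an ElastiCache parameter group.
# "my-redis-params" is a hypothetical group name; a restart is still
# required afterwards for the settings to take effect.
import boto3  # assumed installed and configured with AWS credentials

elasticache = boto3.client("elasticache")
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName="my-redis-params",
    ParameterNameValues=[
        {"ParameterName": "appendonly", "ParameterValue": "yes"},
        {"ParameterName": "appendfsync", "ParameterValue": "everysec"},
    ],
)
```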

Capaj
  • 4,024
  • 2
  • 43
  • 56