I am running a Redis instance with a maxmemory and memory usage of around 25GB. It runs as a StatefulSet in Kubernetes. Since the Redis pod can be scheduled onto any node and may be restarted at any time, I chose AOF persistence over RDB.
But yesterday the Redis pod was restarted and took around 5 minutes to load the data, which made me wonder whether RDB is better suited when the dataset is large.
- I know that the AOF file can grow large, and that Redis automatically rewrites it to keep it compact.
- But even with a fully rewritten AOF, a restart still has to replay all the commands needed to rebuild the 25GB dataset, which is time-consuming (around 5 minutes, which I measured).
- I want to reduce this time and am thinking of going back to RDB snapshots. Some data may be lost, but that is the tradeoff for a fast startup. I would schedule BGSAVE at shorter intervals to reduce the potential data loss.
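For reference, the snapshot schedule I have in mind would look something like this in redis.conf (the thresholds below are illustrative assumptions, not tuned recommendations):

```
# redis.conf — switch from AOF to RDB snapshotting
appendonly no        # disable AOF
save 300 1           # BGSAVE if at least 1 key changed in 300s (5 min)
save 60 10000        # snapshot sooner under heavy write load
dbfilename dump.rdb
dir /data            # should point at the pod's persistent volume
```

One thing I'm aware of: BGSAVE forks the Redis process, so with a ~25GB dataset, copy-on-write during the snapshot can temporarily require significant extra memory, which matters for the pod's memory limit.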
I wanted to know: is RDB preferable when the dataset is large and you want to avoid a slow startup? And is running BGSAVE every 5 or 10 minutes good practice when the dataset is large (above 15GB)?
(Also, 25GB may not count as large for others, but what I'm worried about here is the startup time and the downtime it brings.)
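In case it helps frame the question: instead of `save` directives, I could also trigger BGSAVE from outside the pod on a schedule, e.g. with a Kubernetes CronJob. A rough sketch, where the job name and service address are placeholders for my setup:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: redis-bgsave            # placeholder name
spec:
  schedule: "*/5 * * * *"       # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: bgsave
            image: redis:7
            # placeholder service DNS name for the Redis StatefulSet
            command: ["redis-cli", "-h", "redis.default.svc.cluster.local", "BGSAVE"]
          restartPolicy: OnFailure
```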