
I currently have a Redis cluster (AWS ElastiCache) with the instance type cache.t2.micro.

There is no option to automatically back up the Redis data to S3 for this instance type, and one also cannot run the BGSAVE command, as it is restricted, as described here.

I noticed that the data on the nodes is deleted completely if the primary node in the Redis cluster is rebooted, or if the Redis engine version is upgraded (e.g. from 3.x to 4.x), even though AWS claims to make a best effort to retain it.

Also, taking snapshots for this instance type is not supported, as described here.

The only option I could think of was to use the DUMP command to get the serialized version of each key, archive this data for all the DBs, and then restore it to the new cluster using the RESTORE command. But this probably isn't the best way to do it, as it doesn't scale and would take a long time for larger datasets.

Also, for keys with a TTL set, I would have to run the TTL command to obtain the remaining TTL (which is additional overhead).
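
In rough terms, that approach would look something like the sketch below (assuming redis-py; the function name, endpoints, port, and key pattern are placeholders):

from redis import Redis

def dump_and_restore(source_host, dest_host, port=6379, db=0):
    # Sketch only: serialize each key with DUMP, keep its remaining TTL,
    # and recreate it on the destination with RESTORE.
    src = Redis(source_host, port, db=db)
    dst = Redis(dest_host, port, db=db)
    for key in src.keys('*'):
        payload = src.dump(key)    # serialized value of the key
        pttl = src.pttl(key)       # remaining TTL in ms; negative if none
        dst.restore(key, pttl if pttl > 0 else 0, payload)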

But restoring the dumped payload fails with the error DUMP payload version or checksum is wrong, which rules that option out (no wonder the DUMP command wasn't restricted).

Is there any other way to do the backup and restore in this case, other than reading all the keys and their values?

Thanks.


EDIT:

So I know this is not the best way to do this, but this is what I could come up with.

TL;DR

Reads all the keys and migrates them from one cluster to another.

This shouldn't be a problem for clusters with node types larger than t2.*.

code:

import traceback
from redis import Redis

def migrate_to_another_redis_node(source_node, source_node_port_no, dest_node, dest_node_port_no):
    '''
    Migrates the keys from one Redis node to the other.
    :param source_node: source redis node url
    :param source_node_port_no: source redis node port number
    :param dest_node: destination redis node url
    :param dest_node_port_no: destination redis node port number
    :return: True/False
    '''
    try:
        total_keys_migrated = 0
        # Non-clustered Redis exposes 16 logical databases by default.
        for db in range(16):
            source_redis_client = Redis(source_node, source_node_port_no, db=db)
            dest_redis_client = Redis(dest_node, dest_node_port_no, db=db)
            for key in source_redis_client.keys('*'):
                key_type = source_redis_client.type(key).decode()
                if key_type == 'string':
                    value = source_redis_client.get(key)
                    dest_redis_client.set(key, value)
                elif key_type == 'list':
                    values = source_redis_client.lrange(key, 0, -1)
                    dest_redis_client.rpush(key, *values)
                elif key_type == 'hash':
                    key_value_pairs = source_redis_client.hgetall(key)
                    dest_redis_client.hmset(key, key_value_pairs)
                elif key_type == 'set':
                    values = source_redis_client.smembers(key)
                    dest_redis_client.sadd(key, *values)
                elif key_type == 'zset':
                    # zrange(..., withscores=True) returns (member, score) pairs;
                    # redis-py 3.x expects a {member: score} mapping for zadd.
                    member_scores = dict(source_redis_client.zrange(key, 0, -1, withscores=True))
                    dest_redis_client.zadd(key, member_scores)
                # Preserve the remaining TTL; ttl() returns -1 when no expiry is set.
                ttl = source_redis_client.ttl(key)
                if ttl and ttl > 0:
                    dest_redis_client.expire(key, ttl)
                total_keys_migrated += 1
        print('total keys migrated is {}'.format(total_keys_migrated))
        return True
    except Exception:
        error = traceback.format_exc()
        print(error)
    return False

The above works irrespective of the key type.
Performance: it migrated around 4,000 keys in 2 seconds.
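
For reference, a hypothetical invocation (the endpoints below are placeholders; use your actual cluster URLs and ports) would look like:

# Placeholder endpoints; replace with your real ElastiCache node URLs.
migrate_to_another_redis_node('old-cluster.xxxxxx.0001.use1.cache.amazonaws.com', 6379,
                              'new-cluster.xxxxxx.0001.use1.cache.amazonaws.com', 6379)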

Adarsh

1 Answer


According to AWS documentation:

For Redis (cluster mode disabled) clusters, backup and restore aren't supported on cache.t1.micro nodes. All other cache node types are supported.

So for a cache.t2 node you can create a manual/final snapshot and change the node type when restoring from the snapshot.
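
As a rough illustration (a sketch only; the cluster IDs, snapshot name, and node type below are placeholders, and it assumes boto3's ElastiCache client for a single-node, cluster-mode-disabled setup):

import boto3

elasticache = boto3.client('elasticache')

# Take a manual snapshot of the existing cluster (placeholder IDs).
elasticache.create_snapshot(
    CacheClusterId='my-t2-cluster',
    SnapshotName='my-t2-cluster-backup'
)

# Restore the snapshot into a new cluster with a different node type.
elasticache.create_cache_cluster(
    CacheClusterId='my-new-cluster',
    CacheNodeType='cache.m5.large',
    Engine='redis',
    SnapshotName='my-t2-cluster-backup'
)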

Libu Mathew
Gitty