I store a lot of data in Redis. One group of keys uses the namespace cache (each key starts with `cache:`). I want to know the total size of the values under this namespace. Can I achieve this in Redis? Any suggestions?


3 Answers
You can do this with RedisGears (https://oss.redislabs.com/redisgears/) with a single line:
RG.PYEXECUTE "GB().map(lambda x: int(execute('MEMORY', 'USAGE', x['key']))).aggregate(0, lambda a,x: a+x, lambda a,x: a+x).run('cache:*')"
The first map operation gets the size of each key, and the aggregate operation sums them. The argument to the run function is the key prefix to run on.

You may use the SCAN command together with MEMORY USAGE. Depending on the size of your database (you may check it with DBSIZE), you may adjust the COUNT option of SCAN. The following command scans the database for keys matching the `cache:` prefix:
SCAN 0 MATCH cache:* COUNT 2000
Then you may execute MEMORY USAGE on each of the returned keys. You can do this in your favorite programming language with any available Redis client library.
A Lua example could look something like this (I don't have much experience with Lua, but it appears to work). It returns the total size of the matched values in bytes.
local response = redis.call("SCAN", 0, "MATCH", "cache:*", "COUNT", 2000)
local keys = response[2]
local total = 0
for i = 1, #keys do
    total = total + redis.call("MEMORY", "USAGE", keys[i])
end
return total
It may not be the best-performing solution for large databases; you may need to update your cursor.
Edit: As @for_stack pointed out in the comments, this will not work when COUNT is smaller than the total number of matching keys; in that case the scan needs to be iterated multiple times with the returned cursor.

- There's a flaw in the Lua script. In order to get all matched keys, you need to call `SCAN` again and again until the returned cursor is 0. However, you cannot scan all keys in a Lua script, otherwise Redis will be blocked. So a better solution might be to pass the cursor as an argument to your script. – for_stack Aug 07 '20 at 04:13
- @for_stack thank you for the comment, you are right (I wrote in the answer that it may only work for a small number of total keys). – Ersoy Aug 07 '20 at 04:50
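Following @for_stack's suggestion, the script could be reworked as a sketch that takes the cursor as an argument (hypothetical, untested against a live server): the client calls it repeatedly with EVAL, feeding the returned cursor back in until it comes back as 0, and sums the per-call totals client-side.

```lua
-- Sketch: cursor is passed in as ARGV[1], e.g.
--   EVAL "<script>" 0 <cursor>
-- Returns { next_cursor, bytes_for_this_batch }.
local response = redis.call("SCAN", ARGV[1], "MATCH", "cache:*", "COUNT", 2000)
local cursor = response[1]
local keys = response[2]
local total = 0
for i = 1, #keys do
    -- MEMORY USAGE can return nil if a key expired mid-scan
    total = total + (redis.call("MEMORY", "USAGE", keys[i]) or 0)
end
return { cursor, total }
```

This keeps each EVAL call short (one SCAN batch), so Redis is never blocked for long.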
A Node.js snippet using ioredis, iterating the cursor until it returns "0":
async function calculateKeysSize(redisClient, matchPattern) {
  let iterations = 0;
  let totalKeys = 0;
  let totalBytes = 0;
  let nextCursor;
  let currCursorKeys;
  while (nextCursor !== "0") {
    [nextCursor, currCursorKeys] = await redisClient.scan(nextCursor || 0, "match", matchPattern);
    totalKeys += currCursorKeys.length;
    const pipeline = redisClient.pipeline();
    for (const currKey of currCursorKeys) {
      pipeline.memory("usage", currKey);
    }
    const responses = await pipeline.exec();
    // each response is [err, result]; result may be null if the key vanished
    const sizes = responses.map((response) => response[1] || 0);
    totalBytes += sizes.reduce((a, b) => a + b, 0);
    if (iterations % 1000 === 0) {
      console.log(`scanned ${totalKeys} keys so far.. total size: ${totalBytes} bytes`);
    }
    iterations++;
  }
  return { totalKeys, totalBytes };
}

await calculateKeysSize(redisClient, "cache:*");
