
I have a scenario where I have to read x keys at a time (10,000 in my case) from a Redis cluster, process that batch before retrieving the next one, and do this on each master node. I am using a sync.Map, with the client as key and the cursor as value, to keep track of each node's cursor between passes. For a smaller number of matching records this works correctly, but with ~1.2M matching records I only get back around 600k. I also tried guarding it with a mutex, but that did not solve the issue. My assumption is that a cursor is getting overwritten before it is processed, but I am not sure where or how. Please help with this. Thanks.

This is the scan callback:

err := c.client.ForEachMaster(ctx, func(ctx context.Context, rc *redis.Client) error {
    // ForEachMaster runs this callback concurrently, one goroutine per
    // master, so the shared cursor map and allKeys slice stay guarded.
    mu.Lock()
    defer mu.Unlock() // released on every return path, including the early returns below

    cursor, ok := cursorMap.Load(rc.String())
    if !ok {
        // initial read for this node
        cursor = uint64(0)
    } else if cursor.(uint64) == 0 {
        // all records on this node are read; note the unbox before comparing,
        // since an interface holding a uint64 never equals an untyped 0 directly
        return nil
    }

    keys, retCur, err := rc.Scan(ctx, cursor.(uint64), match, count).Result()
    switch {
    case err == redis.Nil:
        return ErrKeyNotFound
    case err != nil:
        return fmt.Errorf("scan failed: %w", err)
    }
    allKeys = append(allKeys, keys...)
    cursorMap.Store(rc.String(), retCur)

    return nil
})

This is how I check the cursors after each pass:

cursorMap.Range(func(key, cursor any) bool {
    fmt.Println("cursor value received", key, cursor)
    if cursor.(uint64) != 0 {
        allKeysNotRead = true
        return false
    }
    return true
})
if allKeysNotRead {
    // scan again
}
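
The outer loop driving the two snippets above is essentially the following (simplified; scanOnce wraps the ForEachMaster call and allCursorsDone wraps the Range check shown above):

for {
    // one SCAN batch per master node
    if err := scanOnce(ctx); err != nil {
        return err
    }
    processKeys(allKeys) // process this batch before retrieving the next
    allKeys = allKeys[:0]
    if allCursorsDone() { // every stored cursor is 0
        break
    }
}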
