
Consider a front-facing app where every request shares the same Redis connection, which I believe is the recommended setup (?).

In this situation I believe I'm seeing some weird WATCH/MULTI/EXEC behavior. Specifically, I would expect one of two transactions to fail because of an optimistic-locking failure (i.e., the WATCH guard), but both seem to go through without an error, resulting in the wrong final value.

To illustrate, see the contrived scenario below. It's in Node, but I believe the issue is general. It runs two processes in parallel which both update a counter. (It basically implements the canonical WATCH example from the Redis docs.)

The expected result is that the first process increments the counter by 1 while the second fails to update and returns null. Instead, both processes update the counter. However, one is based on a stale read, so in the end the counter is incremented by 1 instead of 2.

    //NOTE: db is a promisified version of node-redis, but that really doesn't matter
    var db = Source.app.repos.redis._raw;
    Promise.all(_.reduce([1, 2], function(arr, val) {
        db.watch("incr");
        var p = Promise.resolve()
            .then(function() {
                return db.get("incr");
            })
            .then(function(val) { //say 'val' returns '4' for both processes.
                console.log(val);
                val++;
                db.multi();
                db.set("incr", val);
                return db.exec();
            })
            .then(function(resultShouldBeNullAtLeastOnce) {
                console.log(resultShouldBeNullAtLeastOnce);
            return; //explicit end
            });
        arr.push(p);
        return arr;
    }, [])).then(function() {
        console.log("done all");
        next(undefined);
    });

The resulting interleaving is seen when tailing Redis' MONITOR command:

    1414491001.635833 [0 127.0.0.1:60979] "watch" "incr"
    1414491001.635936 [0 127.0.0.1:60979] "watch" "incr"
    1414491001.636225 [0 127.0.0.1:60979] "get" "incr"
    1414491001.636242 [0 127.0.0.1:60979] "get" "incr"
    1414491001.636533 [0 127.0.0.1:60979] "multi"
    1414491001.636723 [0 127.0.0.1:60979] "set" "incr" "5"
    1414491001.636737 [0 127.0.0.1:60979] "exec"
    1414491001.639660 [0 127.0.0.1:60979] "multi"
    1414491001.639691 [0 127.0.0.1:60979] "set" "incr" "5"
    1414491001.639704 [0 127.0.0.1:60979] "exec"

Is this expected behavior? Would using multiple Redis connections circumvent this issue?

Geert-Jan
  • What is the "wrong final value" that you're getting? 5 or 10? – Itamar Haber Oct 28 '14 at 12:10
  • The value of `incr` was `4` and after both processes have incremented it the value is `5`. This value is expected, but the second transaction should fail, because the value for `incr` has changed from `4` to `5` and therefore the `watch`-guard on the second transaction should fail. This does not happen – Geert-Jan Oct 28 '14 at 13:27
  • @itamarHaber probably this is just the way it is. Using multiple Redis connections correctly results in the watch guard failing. – Geert-Jan Oct 28 '14 at 13:35
  • duplicate of http://stackoverflow.com/questions/15776955/redis-watch-multi-exec-by-one-client/20186334#20186334 – Nick Bondarenko Oct 29 '14 at 07:33

2 Answers


To answer my own question:

This is expected behavior. The first EXEC removes all watches on the connection. Therefore, the second MULTI/EXEC goes through without any watch guard.

It's in the docs, but it's fairly hidden.

Solution: use multiple connections, in spite of some answers on SO explicitly warning against this since it (quote) 'shouldn't be needed'. In this situation it is needed.
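For illustration, here is a minimal sketch of the multi-connection approach. It assumes the `ioredis` client (not the promisified node-redis from the question), where an aborted transaction makes `exec()` resolve to `null`; the function names are mine:

```javascript
// Sketch: one dedicated connection per concurrent transaction,
// so each WATCH is scoped to its own connection and the first
// EXEC cannot clear the other transaction's watch.

// Pure helper, split out so the increment logic is easy to test.
function nextValue(current) {
  return Number(current || 0) + 1;
}

async function incrWithWatch(conn, key) {
  await conn.watch(key);
  const current = await conn.get(key);
  // With a dedicated connection, this EXEC correctly resolves to
  // null when the watched key was changed by the other client.
  return conn.multi().set(key, nextValue(current)).exec();
}

async function main() {
  const Redis = require("ioredis"); // assumed client library
  const a = new Redis();
  const b = new Redis();
  const [resA, resB] = await Promise.all([
    incrWithWatch(a, "incr"),
    incrWithWatch(b, "incr"),
  ]);
  // Expect exactly one of these to be null: that transaction hit
  // the watch guard and should be retried.
  console.log(resA, resB);
  a.quit();
  b.quit();
}
```

Note that the abort convention varies by client: ioredis resolves `exec()` to `null`, while node-redis v4 rejects with a `WatchError`, so check what your client does.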

Geert-Jan

Too late, but for anyone reading this in the future: the solution suggested by Geert-Jan is not advised by Redis.

One request per connection

Many databases use the concept of REST as a primary interface: send a plain old HTTP request to an endpoint with arguments encoded as POST. The database grabs the information, returns it as a response with a status code, and closes the connection. Redis should be used differently: the connection should be persistent, and you should make requests as needed to a long-lived connection. However, well-meaning developers sometimes create a connection, run a command, and close the connection. While opening and closing connections per command will technically work, it's far from optimal and needlessly cuts into the performance of Redis as a whole.

Using the OSS Cluster API, connections to the nodes are maintained by the client as needed, so you'll have multiple connections open to different nodes at any given time. With Redis Enterprise, the connection is actually to a proxy, which takes care of the complexity of connections at the cluster level. TL;DR: Redis connections are designed to stay open across countless operations. Best-practice alternative: keep your connections open over multiple commands.

A better solution to this problem is to use Lua scripts, which make your set of operations atomic: use EVAL to run Redis scripts.
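As a rough sketch (the script body and key name are illustrative, not from the answer), the read-increment-write from the question could be pushed into a Lua script so the whole thing runs atomically on the server, again assuming the `ioredis` client:

```javascript
// Sketch: an atomic read-modify-write as a server-side Lua script.
// Redis runs the entire script atomically, so no WATCH/MULTI/EXEC
// (and no extra connections) are needed.
const INCR_SCRIPT = `
local v = tonumber(redis.call('GET', KEYS[1]) or '0')
v = v + 1
redis.call('SET', KEYS[1], v)
return v
`;

async function atomicIncr(key) {
  const Redis = require("ioredis"); // assumed client library
  const conn = new Redis();
  try {
    // eval(script, numKeys, ...keys): KEYS[1] becomes `key`.
    return await conn.eval(INCR_SCRIPT, 1, key);
  } finally {
    conn.quit();
  }
}
```

For this particular example a plain `INCR incr` would already be atomic; EVAL earns its keep once the read-modify-write logic is more complex than a single built-in command.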

  • Sometimes EVAL isn't enough. For example, using redis as a cache with a database: you need to WATCH a key, then read from the database, then SET data in redis as a MULTI transaction. Otherwise a racing client might update the source data, and update redis first, meaning without a transaction, one or the other will end up writing stale data to the cache. Any lua code run with EVAL will be executed all at once, with no chance to read from the source database with safety that no other client is doing the same. – JeremyTM Oct 26 '21 at 00:04
  • To address the issue of opening a connection, running a command, then closing the connection: what we do is allow our redis client to open a pool of connections over time up to a "preferred" amount, and any time some code needs to run a MULTI transaction, it gets a connection from the pool. If there are no connections available, it can burst temporary connections up to a max limit, or it will wait for the next available connection. Most of the time, however, there is a pool connection readily available. – JeremyTM Oct 26 '21 at 00:12
  • Thanks for bringing this up, agreed that EVAL won't be enough for cases like the one @JeremyTM mentioned – Shubham Sharma Oct 26 '21 at 13:56
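The pooling approach JeremyTM describes in the comments could be sketched like this, assuming the `generic-pool` and `ioredis` packages (both my choice; the comment doesn't name a library):

```javascript
// Sketch: a small connection pool so each MULTI/EXEC gets its own
// connection, instead of opening and closing one per command.
function makeRedisPool() {
  const genericPool = require("generic-pool"); // assumed package
  const Redis = require("ioredis");            // assumed client

  const factory = {
    create: async () => new Redis(),
    destroy: async (conn) => conn.quit(),
  };
  // `min` connections stay warm; when the pool is exhausted,
  // acquire() bursts up to `max` or waits for a free connection.
  return genericPool.createPool(factory, { min: 2, max: 10 });
}

async function transactionalIncr(pool, key) {
  const conn = await pool.acquire();
  try {
    await conn.watch(key);
    const current = Number(await conn.get(key)) || 0;
    // With ioredis this resolves to null if the watch guard fires.
    return await conn.multi().set(key, current + 1).exec();
  } finally {
    // Always return the connection, even if the transaction aborts.
    pool.release(conn);
  }
}
```

This keeps connections long-lived (matching the best-practice quote above) while still giving each transaction exclusive use of a connection for the duration of its WATCH.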