Consider a front-facing app where every request shares the same Redis connection, which I believe is the recommended setup (is it?).
In this situation I'm seeing some odd WATCH/MULTI/EXEC behavior. Specifically, I would expect one of two concurrent transactions to fail due to an optimistic-locking conflict (i.e., the WATCH guard), but both go through without complaint, and the final value ends up wrong.
To illustrate, here is a contrived scenario. It's written in Node, but I believe the problem is general. It runs two "processes" in parallel, each of which increments a counter; it's essentially the canonical WATCH example from the Redis docs (reproduced just below). The expected result is that the first process increments the counter by 1 while the second fails at EXEC and returns null. Instead, both EXECs succeed, but one of them is based on a stale read, so in the end the counter is incremented by 1 instead of 2.
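For reference, this is the pattern from the Redis transactions documentation that my snippet mimics (the GET/increment step is pseudocode, as in the docs):

WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC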
// NOTE: db is a promisified version of node-redis, but that really doesn't matter
var db = Source.app.repos.redis._raw;

Promise.all(_.reduce([1, 2], function(arr, i) {
  db.watch("incr"); // both "processes" WATCH on the same shared connection
  var p = Promise.resolve()
    .then(function() {
      return db.get("incr");
    })
    .then(function(val) { // say 'val' is '4' for both processes
      console.log(val);
      val++;
      db.multi();
      db.set("incr", val);
      return db.exec();
    })
    .then(function(resultShouldBeNullAtLeastOnce) {
      console.log(resultShouldBeNullAtLeastOnce);
      return; // explicit end
    });
  arr.push(p);
  return arr;
}, [])).then(function() {
  console.log("done all");
  next(undefined);
});
Tailing Redis's MONITOR command shows the resulting interleaving:
1414491001.635833 [0 127.0.0.1:60979] "watch" "incr"
1414491001.635936 [0 127.0.0.1:60979] "watch" "incr"
1414491001.636225 [0 127.0.0.1:60979] "get" "incr"
1414491001.636242 [0 127.0.0.1:60979] "get" "incr"
1414491001.636533 [0 127.0.0.1:60979] "multi"
1414491001.636723 [0 127.0.0.1:60979] "set" "incr" "5"
1414491001.636737 [0 127.0.0.1:60979] "exec"
1414491001.639660 [0 127.0.0.1:60979] "multi"
1414491001.639691 [0 127.0.0.1:60979] "set" "incr" "5"
1414491001.639704 [0 127.0.0.1:60979] "exec"
Is this expected behavior? Would using multiple Redis connections (e.g., one per transaction) circumvent this issue?
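For what it's worth, the fix I have in mind is to give each transaction its own connection. Here's a minimal sketch of that idea using plain callback-style node-redis (the incrWithWatch helper and the default createClient() settings are mine, not part of the app above):

var redis = require("redis");

// Hypothetical helper: runs one optimistic increment on a dedicated
// connection, so its WATCH state can't be shared with (or cleared by)
// other in-flight transactions.
function incrWithWatch(done) {
  var client = redis.createClient(); // placeholder connection settings
  client.watch("incr", function(err) {
    if (err) { client.quit(); return done(err); }
    client.get("incr", function(err, val) {
      if (err) { client.quit(); return done(err); }
      client.multi()
        .set("incr", parseInt(val, 10) + 1)
        .exec(function(err, result) {
          client.quit();
          // 'result' should be null if the WATCHed key was modified
          // between WATCH and EXEC, so the caller can retry.
          done(err, result);
        });
    });
  });
}

If that's the right model, running two of these in parallel should make exactly one EXEC return null. Is that the idea?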