
I am wondering how MongoDB's findAndModify compares with the Redis INCR command in terms of speed. I know MongoDB's findAndModify will take a r/w lock on the document, but if I have 100 threads trying to write simultaneously, I was wondering whether Redis would be a preferable option.

Raghu Katti
  • I have not really used Redis but... why don't you just use MongoDB's $inc operator instead? – Sammaye Jul 10 '13 at 20:23
  • Test it (and then blog about it somewhere :) ). There are too many aspects to consider for Stack Overflow here. Hardware, developers, administration, maintenance, requirements/needs. – WiredPrairie Jul 10 '13 at 21:02
  • Thanks for the response... I was wondering if anybody had already done it... I will give it a try :) – Raghu Katti Jul 11 '13 at 14:32

1 Answer


There are many parameters that can alter the result of such a comparison.

MongoDB will take a r/w lock at the database level (not the document level). Redis is a single-threaded server and will serialize everything. In terms of concurrency granularity, the two are mostly equivalent. The Redis implementation is more efficient, though, because with MongoDB you end up with hundreds of threads contending on the same lock.
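As a toy illustration only (not a benchmark of either database), the two concurrency models can be sketched in Python: many client threads contending on one shared lock, versus clients enqueuing commands that a single thread applies serially, the way a single-threaded event loop does:

```python
import threading
import queue

N_THREADS = 20        # concurrent clients
OPS_PER_THREAD = 500  # increments per client

# Model A: every client thread fights for the same lock
# (a rough stand-in for MongoDB's server-wide r/w lock).
counter_a = 0
lock = threading.Lock()

def locked_incr():
    global counter_a
    for _ in range(OPS_PER_THREAD):
        with lock:
            counter_a += 1

threads = [threading.Thread(target=locked_incr) for _ in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()

# Model B: clients only enqueue commands; one thread applies them
# (a rough stand-in for Redis's single-threaded event loop).
counter_b = 0
q = queue.Queue()

def enqueue_incrs():
    for _ in range(OPS_PER_THREAD):
        q.put("INCR")

producers = [threading.Thread(target=enqueue_incrs) for _ in range(N_THREADS)]
for t in producers: t.start()
for t in producers: t.join()

while not q.empty():
    q.get()
    counter_b += 1  # only this one thread mutates the counter: no lock needed

print(counter_a, counter_b)  # both end up at N_THREADS * OPS_PER_THREAD
```

Both models serialize the increments and give the same final count; the difference is where the serialization happens (lock contention among N threads versus a single consumer).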

You also need to consider what happens at the protocol level: the MongoDB protocol is asymmetric, so you can push data without even checking whether the last operation succeeded (i.e. no mandatory acknowledgment). The Redis protocol is purely client/server, so each command returns a result that the client application has to read. You can pipeline commands, though. At the protocol level, MongoDB can let you push data faster than Redis (considering pure performance, without any command acknowledgment).
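As a back-of-the-envelope sketch of why pipelining amortizes the acknowledgment cost (the latency figures below are made-up assumptions, purely illustrative):

```python
# Crude cost model: each round trip pays one network RTT; the server-side
# work per command is the same regardless of pipelining.
RTT_US = 100          # assumed network round-trip time, microseconds
SERVER_US = 2         # assumed server-side cost per command, microseconds
N_CMDS = 100_000

def total_us(pipeline_depth):
    round_trips = -(-N_CMDS // pipeline_depth)  # ceiling division
    return round_trips * RTT_US + N_CMDS * SERVER_US

print(total_us(1))    # 10200000 us: ~10.2 s, dominated by round trips
print(total_us(50))   # 400000 us:   ~0.4 s, round-trip cost amortized 50x
```

With one command per round trip the network latency dominates; batching 50 commands per round trip divides that term by 50, which is exactly the shape of the benchmark numbers further down.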

It also depends on the persistence options: MongoDB journaling is optional, and so is the Redis append-only file. Depending on how each store is configured, you will get vastly different results. Master/slave replication in your MongoDB or Redis cluster will also alter the results...
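For reference, the persistence knobs in question look roughly like this (illustrative snippets; check the documentation for your versions):

```
# redis.conf -- append-only file
appendonly yes
appendfsync everysec   # durability/speed trade-off: always / everysec / no
```

```
# mongod.conf -- journaling (on by default on 64-bit builds; --nojournal disables it)
journal = true
```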

It may also depend on other environmental factors, such as the compiler used to build MongoDB or Redis, the kernel version, etc.

That's why you should run your own benchmark in your own environment.

Running quick-and-dirty benchmarks is easy (but not very representative, so the results must be taken with a grain of salt).

With MongoDB, from the mongo shell:

 > db.toto.save( { _id: 1, val: 0 } )
 > ops = [ { op: "update", ns: "test.toto", query: { _id: 1 }, update: { $inc: { val: 1 } } } ];
 > res = benchRun( { parallel: number_of_connections, seconds: 20, ops: ops, host: "localhost:7380" } );

With Redis:

 $ redis-benchmark -q -n 100000 -t incr -c number_of_connections -P pipelining_factor

Here are some figures I have just collected on my box:

MongoDB   1 connection                    64613 updates/s
MongoDB  50 connections                   53825 updates/s
Redis     1 connection   no pipelining    29437 updates/s
Redis    50 connections  no pipelining   101626 updates/s
Redis    50 connections  pipelining=50   442477 updates/s

We can see that MongoDB is extremely efficient with a single connection thanks to its asymmetric protocol, but that this efficiency decreases as the number of connections grows, due to the r/w lock. Redis, with a single connection and no pipelining, is seriously slowed down by its client/server protocol. However, if the workload is spread over more connections, or if pipelining is used, the cost of waiting for acknowledgments is amortized, and Redis can achieve more throughput than MongoDB (on this particular $0.02 benchmark).

Didier Spezia