
Let's assume the workload is writes only.
Each "document" inserted is less than 140 characters.

How many writes can this database handle?

Alex
  • Per second? As a whole? Are you sure you're never, ever going to read them? I'd benchmark that with some mock data (see the sketch after these comments). Where will the data be stored? On an EBS disk? On the ephemeral disk? At the very least, you could probably write more than 10 million records, I'd guess. Honestly, if you don't plan to read them, you can probably fill your disk with these documents and the server will still handle them... – Oct Jun 28 '11 at 22:03
  • I've been collecting tweets from the live streaming API with PHP + Mongo 2.4.1 on a small EC2 instance (Ubuntu Server 12.04) for the past week, and it's at 32 million documents so far. What I'm decoding is definitely more than 140 characters per document. (The instance is not EBS-optimised.) – Maziyar Mar 29 '13 at 13:56
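For what it's worth, a quick way to put a ballpark number on this is a short insert loop against a throwaway collection on the instance itself. A minimal sketch in Python, assuming pymongo is installed and a mongod is reachable on the default local port (the database and collection names are made up for the example):

    # Rough insert-throughput check; not a rigorous load test.
    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # adjust for your instance
    coll = client["bench"]["tweets"]                    # hypothetical names

    doc = {"text": "x" * 140}   # ~140-character payload, as the question assumes
    n = 100_000

    start = time.time()
    for _ in range(n):
        coll.insert_one(dict(doc))   # copy so each insert gets its own _id
    elapsed = time.time() - start

    print("%.0f inserts/sec" % (n / elapsed))

Run it several times; on EC2 the numbers can swing considerably between runs, which is exactly the point the answer below makes.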

1 Answer


EC2 is notorious for inconsistent throughput. There is no way to answer this question reliably, and even testing this in "production" is going to be problematic because of the varied nature of your platform.

If you want to load-test your application, you need a different platform, and really should be using a hosted (or better, leased) server environment.

With that said, to maximize throughput: use SSD drives, make sure that at least your indexes fit in memory and that they are useful indexes (keeping both the indexes and the working data set in memory is even better), and shard. (Keep in mind that sharding increases complexity, especially on the backup/recovery front.)
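To make the "indexes in memory" point concrete, you can compare the sizes MongoDB reports for a collection against the RAM available to mongod. A small sketch using the collStats command through pymongo (the database and collection names are placeholders):

    # Compare reported data/index sizes with available RAM to see whether
    # the indexes (or the whole working set) can stay in memory.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    stats = client["bench"].command("collStats", "tweets")

    print("data size (bytes):       ", stats["size"])
    print("total index size (bytes):", stats["totalIndexSize"])

If totalIndexSize fits comfortably in RAM you are in reasonable shape; if the data size fits as well, even better.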

gWaldo