
We are trying to build a "real-time" statistics component for our application, and we want to use MongoDB.

So, to do this, I basically imagine a database named storage. In this database, I create a statistics collection.

And I store my data like this:

{
    "_id" : ObjectId("55642d270528055b171fedf5"),
    "cat" : "module",
    "name" : "Injector",
    "ts_min" : ISODate("2015-05-22T13:16:00Z"),
    "nb_action" : {
        "0" : 156
    },
    "tps_action" : {
        "0" : 45016
    },
    "min_tps" : 10,
    "max_tps" : 879
}

So, I have a category, a name and a date that together determine a unique object. In this object, I store:

  • Number of uses per second (nb_action.[0..59])
  • Total time per second (tps_action.[0..59])
  • Min time
  • Max time
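Not part of the question's code, but to make the bucketing concrete, here is a small sketch (plain JavaScript, hypothetical helper name) of how an event timestamp could map to the minute-level ts_min value and the "0".."59" second key used inside nb_action / tps_action:

```javascript
// Hypothetical helper: map an event timestamp to its minute bucket
// (the value stored in ts_min) and its second-of-minute key
// (the "0".."59" key used inside nb_action / tps_action).
function bucketFor(eventDate) {
  const tsMin = new Date(eventDate);        // copy, then truncate to the minute
  tsMin.setUTCSeconds(0, 0);                // zero out seconds and milliseconds
  const secondKey = String(eventDate.getUTCSeconds());
  return { tsMin, secondKey };
}

// Example: an event at 13:16:45.250 UTC
const b = bucketFor(new Date("2015-05-22T13:16:45.250Z"));
// b.tsMin     -> 2015-05-22T13:16:00.000Z
// b.secondKey -> "45"
```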

Now, to inject my data, I use an upsert:

db.statistics.update({ 
 ts_min: ISODate("2015-05-22T13:16:00.000Z"),
 name: "Injector",
 cat: "module"
},
{
  $inc: {"nb_action.0":1, "tps_action.0":250},
  $min: {min_tps:250},
  $max: {max_tps:250}
},
{ upsert: true })

So, I increment 2 fields with $inc to manage my counters, and I use $min and $max to manage my stats.
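As a sketch of how a client might build that update for each event (hypothetical buildStatUpdate helper, not part of the question's code), assuming elapsedMs is the measured duration of the action in milliseconds:

```javascript
// Hypothetical helper: build the filter and update documents for one event.
// `cat`/`name` identify the module, `eventDate` the event time,
// `elapsedMs` the measured duration of the action.
function buildStatUpdate(cat, name, eventDate, elapsedMs) {
  const tsMin = new Date(eventDate);
  tsMin.setUTCSeconds(0, 0);               // truncate to the minute
  const s = eventDate.getUTCSeconds();     // second-of-minute bucket

  const filter = { ts_min: tsMin, name: name, cat: cat };
  const update = {
    $inc: {                                // per-second counters
      ["nb_action." + s]: 1,
      ["tps_action." + s]: elapsedMs
    },
    $min: { min_tps: elapsedMs },          // running minimum
    $max: { max_tps: elapsedMs }           // running maximum
  };
  return { filter, update };
}

// With the official Node.js driver this could then be applied as, e.g.:
// const { filter, update } = buildStatUpdate("module", "Injector", now, 250);
// db.collection("statistics").updateOne(filter, update, { upsert: true });
```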

All of this works...

With 1 thread injecting 50,000 data points on one single machine (no sharding), spread across 10 modules, I observe 3,000 to 3,500 ops per second.

And my problem is... I can't tell whether that's good or not.

Any suggestions?

PS: I use long field names for the example, and in the real code I also add a set part ($setOnInsert) to initialize each second in case of an insert.
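The "set part" mentioned in the PS could look like this sketch (hypothetical helper): a $setOnInsert document that zero-fills every second bucket except the one the $inc already targets, since MongoDB rejects an update in which $inc and $setOnInsert write the same path:

```javascript
// Hypothetical sketch of the "set part" from the PS: zero-fill all 60
// second-buckets on insert, skipping the bucket the $inc already targets
// ($inc and $setOnInsert must not write the same path in one update).
function setOnInsertFor(currentSecond) {
  const init = {};
  for (let s = 0; s < 60; s++) {
    if (s === currentSecond) continue;   // leave this bucket to $inc
    init["nb_action." + s] = 0;
    init["tps_action." + s] = 0;
  }
  return init;
}

// Merged into the upsert as, e.g.:
// update.$setOnInsert = setOnInsertFor(45);
```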
