
I am stuck on a problem with MongoDB sharding. My test setup is given below:

1. Application server (1 server), where my application runs.

2. mongos query router (1 server).

3. Two shards: the primary shard contains the complete DB; the second shard is blank.

There is a collection called "DEMO" whose documents look like this:

"_id" : ObjectId("541c2df0e4b06af824c2c046"),
    "country" : "INDIA",

    "deviceType" : "manu-laptop",
    "osVersion" : "patanahi",

     "logtime" : {
            "logtime" : ISODate("2014-09-19T13:21:52.596Z"),
            "logtimeStr" : "19-09-2014 06:51:52",
            "day" : 19,
            "month" : 9,
            "year" : 2014,
            "hour" : 18,
            "min" : 51,
            "second" : 52
    },

    "countryId" : "511d0f28c3c4e5cc447c8dac"

There are two countries, INDIA and CHINA. I have sharded the collection on the country key. The command I used is:

db.runCommand({shardcollection:"demo.db",key:{"country" : 1}});
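For reference, the shell-helper form is sketched below; note that the namespace used above, "demo.db", matches neither the DEMO collection nor the demo.device namespace shown in the sh.status() output further down, so demo.device is assumed here:

// Shell-helper equivalent of the runCommand above; the "demo.device"
// namespace is an assumption based on the sh.status() output below.
sh.enableSharding("demo")
sh.shardCollection("demo.device", { "country" : 1 })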

But when I run the load through mongos, it keeps all the data on the primary shard and does not route any of it to the second shard.

The use case is as follows: I want to keep the INDIA data on one shard and the CHINA data on the other shard. Please help.

The setup is complete and working fine. Here is the sh.status() output:

      --- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 3,
        "minCompatibleVersion" : 3,
        "currentVersion" : 4,
        "clusterId" : ObjectId("541bf31a8c554f2e2d4e1ad4")
}
  shards:
        {  "_id" : "shard0000",  "host" : "xx.xx.xx.xx:27017",  "tags" : [     "INDIA" ] }
        {  "_id" : "shard0001",  "host" : "xx.xx.xx.xx:27017",  "tags" : [    "USA" ] }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "demo",  "partitioned" : true,  "primary" : "shard0000" }
                demo.device
                        shard key: { "country" : 1 }
                        chunks:
                                shard0001       1
                                shard0000       1
                        { "country" : { "$minKey" : 1 } } -->> { "country" : "INDIA" } on : shard0001 Timestamp(2, 0)
                        { "country" : "INDIA" } -->> { "country" : { "$maxKey" : 1 } } on : shard0000 Timestamp(2, 1)
                demo.incoming_request_log
                        shard key: { "regionId" : 1 }
                        chunks:
                                shard0001       2
                                shard0000       3
                        { "regionId" : { "$minKey" : 1 } } -->> { "regionId" : 0 } on : shard0001 Timestamp(2, 0)
                        { "regionId" : 0 } -->> { "regionId" : 2 } on : shard0000 Timestamp(3, 1)
                        { "regionId" : 2 } -->> { "regionId" : "0" } on : shard0000 Timestamp(2, 2)
                        { "regionId" : "0" } -->> { "regionId" : "2" } on : shard0000 Timestamp(2, 4)
                        { "regionId" : "2" } -->> { "regionId" : { "$maxKey" : 1 } } on : shard0001 Timestamp(3, 0)
                         tag: INDIA  { "regionId" : "0" } -->> { "regionId" : "1" }
                         tag: USA  { "regionId" : "2" } -->> { "regionId" : "3" }
        {  "_id" : "demo;",  "partitioned" : false,  "primary" : "shard0001" }
  • You should look at tag-based sharding: http://docs.mongodb.org/manual/core/tag-aware-sharding/ – Lalit Agarwal Sep 19 '14 at 13:51
  • Please post sh.status() - I suspect you only have a single chunk and it has not been split yet. It should also be noted that a field with just 2 values is a poor choice for a shard key: you can only ever have 2 chunks (what if you need 4 shards to handle traffic?), and you will likely end up exceeding the max chunk size too, which has its own implications. If you just want to split queries/writes across 2 different sets of data on different servers, why not just have 2 databases instead? – Adam Comerford Sep 19 '14 at 14:18
  • @Adam Comerford: It's just for testing purposes; the production implementation will be different. For testing I just need to route traffic to the shard assigned to each country. – ratr Sep 19 '14 at 15:06
  • Thanks, Lalit Agarwal! Tag-aware sharding worked for me. – ratr Sep 19 '14 at 19:19

2 Answers


This is what I did to solve the problem:

1. I deleted all the shard tags using the following command (this was just a test environment, so I didn't mind deleting them):

sh.removeShardTag("shard0000", "INDIA")

2. I removed the old tag ranges from the config database's tags collection.
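A sketch of that cleanup; the tag ranges live as documents in the config database's tags collection, though the exact filter below is an assumption:

// config.tags stores tag-range documents of the form { ns, min, max, tag };
// removing by tag name alone is an assumption about what had to go.
use config
db.tags.remove({ tag : "INDIA" })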

3. I added a fresh shard tag using the following command:

sh.addShardTag("shard0001", "INDIA")

4. I added the tag range:

sh.addTagRange("demo.incoming_request_log", { regionId: 5 }, { regionId: 9 }, "INDIA")

Note: I wanted to route all the requests to shard1, where regionId is the tagged shard key. So now every request through mongos to the incoming_request_log collection in the demo database where regionId falls between 5 and 9 goes to shard1, and the rest go to shard0.
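As a quick sanity check, the new tag and range can be verified like this:

// The tag range should now appear under demo.incoming_request_log
sh.status()
// ...or inspect the raw tag-range documents directly
use config
db.tags.find({ ns : "demo.incoming_request_log" }).pretty()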

– ratr

1) How much data do you have on shard 1? Maybe the maximum chunk size has not been reached yet, in which case there is nothing to split and migrate to the second shard; the default chunk size is 64 MB.

2) You can also specify that documents with certain shard key values should stay on a certain shard; see addShardTag (give a shard a tag) and addTagRange (assign a shard key range to that tag).
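A minimal sketch of both points against the setup in the question (the 1 MB chunk size is only to force splits on a small test data set; the demo.device namespace and country values are taken from the sh.status() output above, and tag ranges are min-inclusive, max-exclusive):

// 1) Lower the max chunk size (default 64 MB) so splits and
//    migrations happen even with very little test data (value in MB).
use config
db.settings.save({ _id : "chunksize", value : 1 })

// 2) Tag the shards and pin each country's key range to a tag.
sh.addShardTag("shard0000", "INDIA")
sh.addShardTag("shard0001", "CHINA")
sh.addTagRange("demo.device", { country : "CHINA" }, { country : "INDIA" }, "CHINA")
sh.addTagRange("demo.device", { country : "INDIA" }, { country : MaxKey }, "INDIA")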

– Astrid