
We deploy a MongoDB sharded cluster on AWS with 3 shards. Each shard is a replica set with 3 members, and there are 3 config servers and 3 mongos routers.

[ shard 1 ] [ shard 2 ] [ shard 3 ]
[ mongos0 ] [ mongos1 ] [ mongos2 ]
[ config0 ] [ config1 ] [ config2 ]
[ (p) md0 ] [ (s) md0 ] [ (s) md0 ]
[ (s) md1 ] [ (p) md1 ] [ (s) md1 ]
[ (s) md2 ] [ (s) md2 ] [ (p) md2 ]

Here is how the shards are distributed across the 3 EC2 instances:

[   md0   ][   md1   ][   md2   ]
[ shard 1 ][ shard 1 ][ shard 1 ]
[ shard 2 ][ shard 2 ][ shard 2 ]
[ shard 3 ][ shard 3 ][ shard 3 ]
[ mongos0 ][ mongos1 ][ mongos2 ]
[ config0 ][ config1 ][ config2 ]

You can see that the members of each replica set are not on the same instance.

But when, for example, md0 goes down, our API (Parse Server) loses its connection to the cluster, although md1 and md2 are still alive and have already elected a new primary. Thanks to replication, all EC2 instances hold a copy of all collections, including the sharded ones.

The connection string looks like this:

mongodb://user:password@mongos0:27017,mongos1:27017,mongos2:27017/mydb
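For context, here is a minimal sketch (the hostnames and credentials are the placeholders from the question, not real values) of how a Node.js client could assemble such a multi-mongos connection string. Listing every mongos router in the URI is what should let the driver fail over to another router when one EC2 instance goes down:

```javascript
// Build a connection string that lists every mongos router, so the driver
// can route requests to any reachable mongos instead of a single one.
// Hostnames and credentials are placeholders.
function buildMongosUri(hosts, db, user, password) {
  return `mongodb://${user}:${password}@${hosts.join(',')}/${db}`;
}

const uri = buildMongosUri(
  ['mongos0:27017', 'mongos1:27017', 'mongos2:27017'],
  'mydb',
  'user',
  'password'
);
// → mongodb://user:password@mongos0:27017,mongos1:27017,mongos2:27017/mydb
```

With a URI like this, router failover is handled in the driver, not the cluster, so a client that still drops its connection when one mongos dies points at driver or application behavior rather than the shard configuration.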

Do you think this is an issue with the configuration of the cluster/shards, or is it a problem with the API, which keeps its existing connection instead of trying to connect to another mongos?

  • Can you show the connection string which your API (Parse Server) is using to connect? Possibly it isn't connecting to the replica set properly. – Vince Bowdren Oct 04 '16 at 10:32
  • I'm having the same issue – it seems to be an issue with mongoose and possibly a bug: https://github.com/Automattic/mongoose/issues/3634 – James O'Brien Jan 21 '17 at 10:19

0 Answers