My MongoDB sharded cluster has 3 shards, each of which is a 3-member replica set. To summarize the state (gathered roughly as sketched after the listing):
Config Servers:
shardcfg1.server.com:27018
shardcfg2.server.com:27018
shardcfg3.server.com:27018
Shard1:
shard11.server.com:27000 (P)
shard12.server.com:27000 (S)
shard13.server.com:27000 (S)
Shard2:
shard21.server.com:27000 (S)
shard22.server.com:27000 (STARTUP)
shard23.server.com:27000 (Unhealthy - invalidReplicaSetConfig: Our replica set configuration is invalid or does not include us)
Shard3:
shard31.server.com:27000 (S)
shard32.server.com:27000 (P)
shard33.server.com:27000 (S)
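For context, this is roughly how the state above was collected; a sketch only, where `mongos.server.com:27017` is a placeholder for my actual mongos address:

```js
// On the mongos (e.g. mongo --host mongos.server.com:27017):
// overall sharding status - shards, databases, chunk distribution
sh.status()

// On each shard member (e.g. mongo --host shard21.server.com:27000):
// replica set state of that shard
rs.status().members.forEach(function (m) {
    // prints lines like "shard22.server.com:27000 STARTUP"
    print(m.name + " " + m.stateStr);
});
```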
As you can see from the state above, the problem lies in Shard2:
- There is no primary in Shard2.
- How did the replica set config come to mark shard23.server.com as not a member? (See the sketch after this list for how I looked at the configs.)
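To dig into the second point, I compared what the healthy secondary and the rejected member each report. A sketch of the kind of check I used, assuming direct connections to each member:

```js
// On shard21.server.com:27000 (healthy secondary): which hosts does the
// current replica set config actually list as members?
rs.conf().members.forEach(function (m) {
    print(m._id + " " + m.host);
});

// On shard23.server.com:27000 (the rejected member): rs.status() returns an
// error along the lines of the invalidReplicaSetConfig message above, and
// rs.conf() shows whichever config it has locally (it may refuse with the
// same error).
rs.status()
rs.conf()
```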
The secondary shard21.server.com can be used to take a dump (sketched below), so potentially there is no data loss. However, I have no idea how to stabilize the cluster again.
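This is the kind of dump I mean, taken directly from the surviving secondary; a sketch only, where the output path is a placeholder and any auth options for my setup are omitted:

```sh
# Dump shard2's data straight from the healthy secondary member.
# /backup/shard2 is a placeholder path; add credentials as needed.
mongodump --host shard21.server.com --port 27000 --out /backup/shard2
```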
How would I remove Shard2 completely from the cluster? Or how should I reinitialize the shard with the same servers again?
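For what it's worth, these are the two directions I have been considering, but I am not sure either is safe in this state. A sketch only: "shard2" as the shard name and the choice of which hosts to keep are assumptions, since the actual names are not shown above.

```js
// Option A: try to get a primary back in Shard2 by force-reconfiguring the
// replica set from the surviving healthy secondary
// (run on shard21.server.com:27000).
var cfg = rs.conf();
// Keep only the member(s) I still trust; which hosts to keep is an assumption.
cfg.members = cfg.members.filter(function (m) {
    return m.host === "shard21.server.com:27000";
});
rs.reconfig(cfg, { force: true });

// Option B: remove the shard from the cluster entirely (run on the mongos,
// against the admin database). "shard2" is a guess at the shard name.
// Draining needs the shard to be reachable, which presumably means
// Option A would have to happen first anyway.
db.adminCommand({ removeShard: "shard2" });
```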