
The MongoDB docs list this three-member configuration (primary, secondary, arbiter) as the minimal architecture of a replica set.

Why would an arbiter be necessary there? If the primary fails, the secondary won't see the heartbeat, so it needs to become primary. In other words, why wouldn't a primary + secondary configuration be sufficient? This related question doesn't seem to address the issue, as it discusses larger numbers of nodes.

Dan Dascalescu
  • Really should be on [dba.stackexchange.com](http://dba.stackexchange.com), as this is not a programming question. The reason, though, is that 2 members with one vote each do not form a majority in order to elect a "Primary". So there needs to be an odd number of nodes, so that a majority still exists in the event of a single node failure. This is well covered in the official documentation. – Neil Lunn Jun 19 '14 at 05:27
  • @NeilLunn: this is not a duplicate of the question in the close reason, as I have already pointed out by linking to that "related" question. Also, there are no "two members" (with data) left in the architecture described in my question. – Dan Dascalescu Jun 19 '14 at 17:11
  • Sorry that was me, I misread. – Sammaye Jun 19 '14 at 21:17
  • I never said this was a duplicate, just that it was off topic for stack overflow. – Neil Lunn Jun 19 '14 at 21:25

2 Answers


Suppose you have only two servers, one primary and one secondary.

If the secondary suddenly cannot reach the primary, it could be that the primary is down (in which case the secondary should become primary), but it could just as well be a network issue that isolated the secondary (that is, the secondary is the one that is in fact down).

However, if you have an arbiter and the secondary cannot reach the primary but it CAN reach the arbiter, then the issue is with the primary, so the secondary must become the new primary. If it can reach neither the primary nor the arbiter, then the secondary knows that it is the one that is isolated or broken (poor secondary :(), so it must not become the primary.
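To make that reasoning concrete, here is a small illustrative sketch (plain JavaScript, a simplified model of the majority rule, not MongoDB's actual election code): a member may stand for election only if it can still see a majority of the configured voting members, counting itself.

```javascript
// Illustrative only -- a simplified model of the majority rule,
// not MongoDB's actual election implementation.
function canStandForElection(totalVoters, reachableVoters) {
  // reachableVoters counts the member itself plus every voting
  // member whose heartbeat it can still see.
  const majority = Math.floor(totalVoters / 2) + 1;
  return reachableVoters >= majority;
}

// Three voters: primary, secondary, arbiter.
// Primary down but arbiter reachable: the secondary sees 2 of 3.
console.log(canStandForElection(3, 2)); // true -> it can become primary
// Secondary isolated: it sees only itself, 1 of 3.
console.log(canStandForElection(3, 1)); // false -> it stays secondary
```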

Enrique Fueyo

If you strip the Arbiter down to its core, it is essentially a non-data-holding member used for voting.

One case for an Arbiter, as I state in the linked question (Why do we need an 'arbiter' in MongoDB replication?), is to break the problems of CAP. But that is not its true purpose, since you could easily replace that Arbiter with a data-holding node and achieve the same effect.

However, an Arbiter will have a few benefits:

  • Small footprint
  • No data
  • No need to sync
  • Can vote instantly
  • Can be placed literally anywhere in your network: on an app server, or even alongside another secondary, to bolster that part of your network (this matters for partitions)

So an Arbiter is extremely useful, even when everything sits on one side of a partition (i.e. when there is no partitioning in your network at all).

Now to explain the base setup. An Arbiter is NOT strictly required (you could swap it for a data-holding node), but 3 data-holding nodes is not the minimum. Three voting members is what you need to keep automatic failover, and 2 data-holding nodes plus 1 Arbiter is the smallest setup that achieves it.
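As a sketch of that minimal setup in the mongo shell (the hostnames here are hypothetical placeholders, substitute your own), you would initiate the set with two data-holding members and one arbiter:

```javascript
// Hypothetical hostnames -- adjust to your environment.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1.example.com:27017" },  // data-holding
    { _id: 1, host: "mongo2.example.com:27017" },  // data-holding
    { _id: 2, host: "arb.example.com:27017", arbiterOnly: true }
  ]
});
```

With this configuration there are three voting members, so the loss of any single member still leaves a two-vote majority. (An arbiter can also be added to an existing set later with `rs.addArb()`.)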

Now to answer:

In other words, why wouldn't a primary + secondary configuration be sufficient?

Because if one of those goes down, only 50% of the votes remain (1 of 2), and 50% is not classed as a sufficient majority for MongoDB to actually vote in a member (majority is judged against the total number of configured voting members in your rs.conf(), not against the members currently up).
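The arithmetic behind that, as a quick sketch: a strict majority of n configured voting members is floor(n/2) + 1.

```javascript
// Strict majority of n configured voting members.
const majority = n => Math.floor(n / 2) + 1;

// Two-member set: majority is 2, so a lone surviving member
// (1 vote) can never elect itself primary.
console.log(majority(2)); // 2
// Three-member set (2 data nodes + 1 arbiter): majority is 2,
// so any two surviving members can still elect a primary.
console.log(majority(3)); // 2
```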

Also, in this case MongoDB does not actually know whether that last member really is the last one standing; it needs other members to tell it so.

So yes, this is why you need a third member.

Sammaye