31

Assume we set up MongoDB replication without an arbiter. If the primary is unavailable, the replica set will elect a secondary to be primary. So I think of that as a kind of implicit arbiter, since the replica set will elect a primary automatically.

So I am wondering: why do we need a dedicated arbiter node? Thanks!

卢声远 Shengyuan Lu

4 Answers

17

I created a spreadsheet to better illustrate the effect of Arbiter nodes in a Replica Set.

[image: spreadsheet comparing replica set configurations with and without an arbiter]

It basically comes down to these points (the sketch after the list works through the arithmetic):

  1. With an RS of 2 data nodes, losing 1 server brings you below your voting minimum (which is "greater than N/2"). An arbiter solves this.
  2. With an RS of an even number of data nodes, adding an Arbiter increases your fault tolerance by 1 without making it possible to have 2 voting clusters due to a split.
  3. With an RS of an odd number of data nodes, adding an Arbiter makes the total vote count even again, so a split can leave neither side with "greater than N/2" votes (and therefore no primary anywhere); it adds no fault tolerance and only reintroduces the risk of a tie.
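To make the "greater than N/2" arithmetic behind these points concrete, here is a minimal sketch (my own, not the spreadsheet from the answer) that computes the majority threshold and fault tolerance for a few combinations of data nodes and arbiters:

```python
# Strict-majority arithmetic for a MongoDB replica set (illustrative values only).

def majority(total_votes: int) -> int:
    """Smallest number of votes that is strictly greater than total_votes / 2."""
    return total_votes // 2 + 1

def fault_tolerance(total_votes: int) -> int:
    """How many voting members can be lost while a primary can still be elected."""
    return total_votes - majority(total_votes)

print(f"{'data nodes':>10} {'arbiters':>8} {'votes':>5} {'majority':>8} {'tolerance':>9}")
for data_nodes in (2, 3, 4, 5):
    for arbiters in (0, 1):
        votes = data_nodes + arbiters
        print(f"{data_nodes:>10} {arbiters:>8} {votes:>5} "
              f"{majority(votes):>8} {fault_tolerance(votes):>9}")
```

The rows where adding an arbiter leaves the tolerance unchanged are exactly the odd-data-node cases from point 3.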

Elections are explained [in poor] detail here. In that document it states that an RS can have 50 members (even number) and 7 voting members. I emphasize "states" because it does not explain how it works. To me it seems that if you have a split happen with 4 members (all voting) on one side and 46 members (3 voting) on the other, you'd rather have the 46 elect a primary and the 4 to be a read-only cluster. But, that's exactly what "limited voting" prevents. In that situation you will actually have a 4 member cluster with a primary and a 46 member cluster that is read only. Explaining how that makes sense is out of the scope of this question and beyond my knowledge.
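To illustrate why the 4-member side wins in that scenario, here is a hedged sketch (mine, not from the linked documentation): only the 7 voting members count toward the majority, so the side holding 4 of those votes can elect a primary even though it has far fewer members overall:

```python
# Hypothetical 50-member replica set from the paragraph above: 7 voting members,
# split as 4 members (4 votes) on one side and 46 members (3 votes) on the other.
# Only votes matter for the election.

TOTAL_VOTES = 7  # MongoDB caps the number of voting members at 7

def can_elect_primary(votes_on_this_side: int, total_votes: int = TOTAL_VOTES) -> bool:
    return votes_on_this_side > total_votes / 2

sides = {"A (4 members, 4 votes)": 4, "B (46 members, 3 votes)": 3}
for name, votes in sides.items():
    print(f"side {name}: can elect primary = {can_elect_primary(votes)}")
# side A -> True, side B -> False: the small side ends up with the primary.
```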

Bruno Bronosky
  • The reason is that if those 42 members are non-voting, that is taken to mean you don't want them used for data integrity, if I remember right. So even though you have more members on one side, only voting members are counted, since non-voting members can include all kinds of weird servers that you really don't want in the group that decides where your data should come from – Sammaye Nov 06 '17 at 22:43
  • I agree that if you have "all kinds of weird servers", you wouldn't want them voting. For that reason, I support MongoDB having a concept of non-voting members. But, it seems quite arbitrary for MongoDB to conclude, "if you have 49 servers, 42 of them are weird servers that shouldn't vote". It is so arbitrary, that I'm sure that is NOT their reasoning. Therefore I stand by my claim that a voting limit of 7 is unexplained. – Bruno Bronosky Nov 06 '17 at 22:56
  • Maybe, it has been a while since I last talked to MongoDB Inc, so I am not in touch with their current thought patterns – Sammaye Nov 06 '17 at 22:59
  • So if I have 1 primary and 2 secondaries do I need an arbiter, and if I have 1 primary and 3 secondaries I don't need an arbiter? – Mostafa Hussein Jul 22 '19 at 10:38
  • @MostafaHussein you have it backwards. You use an arbiter to avoid even numbers. You don't create even numbers with them. That's the difference between the red and green cells. – Bruno Bronosky Jul 23 '19 at 05:26
  • @BrunoBronosky What if I have 1 primary + 2 secondaries and 2 arbiters? Will it behave the same as 1 primary + 3 secondaries and 1 arbiter? Does it still have a fault tolerance of 2? – LDropl Nov 14 '19 at 10:18
  • @LDropl I can't imagine a situation where you would want more than 1 arbiter. If your 2 secondaries get isolated from your 1 primary, you want the 2 secondaries to vote a primary between them. But with 2 arbiters, if your 2 secondaries get isolated, your 1 primary still has a majority and the 2 secondaries will not vote in a new primary. So, by having 2 arbiters, it's like reducing your fault tolerance by 1. – Bruno Bronosky Nov 14 '19 at 12:11
  • The limit on the number of voting members is there because too many voting members will make the election process in Raft take too much time, and maybe they found that more than 7 voting members causes an unacceptable recovery time. – Gzorg Jan 09 '21 at 08:15
10

It's necessary to have an arbiter in a replica set for the reasons below (a short sketch after the list shows how to spot arbiters in a config):

  • Replication is more reliable when the replica set has an odd number of members. If there is an even number of members, it is better to add an arbiter.
  • Arbiters do not hold data; they exist only to vote in elections when a node fails.
  • An arbiter is a lightweight process and does not consume many hardware resources.
  • Arbiters only exchange user credential data with the rest of the replica set, and that traffic is encrypted.
  • Votes during elections, heartbeats, and configuration data are not encrypted in communication between replica set members.
  • It is better to run the arbiter on a separate machine, rather than alongside one of the data-bearing members, to retain high availability.
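Here is a hedged pymongo sketch (the connection string and replica set name are placeholders) that lists the members of a replica set and flags which ones are arbiters, i.e. members that carry a vote but no data:

```python
# Minimal sketch, assuming pymongo is installed and the hostnames below exist.
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.net:27017/?replicaSet=rs0")

# replSetGetConfig returns the current replica set configuration document.
config = client.admin.command("replSetGetConfig")["config"]
for member in config["members"]:
    is_arbiter = member.get("arbiterOnly", False)
    votes = member.get("votes", 1)
    kind = "ARBITER (vote only, no data)" if is_arbiter else "data-bearing"
    print(f"{member['host']:<30} votes={votes}  {kind}")
```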

Hope this helps !!!

whoami - fakeFaceTrueSoul
Jerry
  • I think the 2nd statement needs adjustment. It said: 'Arbiters do have data in them', but MongoDB says differently: 'Hidden nodes and arbiters are different types of replica set members; arbiters hold no data while hidden nodes replicate from the oplog' - this is what I got from the MongoDB documentation! – whoami - fakeFaceTrueSoul Jul 01 '19 at 17:11
  • Remarkable: Arbiters do not hold data in them and they are just to vote in election when there is any node failure. – Rejwanul Reja Mar 25 '20 at 03:43
9

This really comes down to the CAP theorem, whereby it is stated that if there is an equal number of servers on either side of the partition, the database cannot maintain CAP (Consistency, Availability, and Partition tolerance). An Arbiter is specifically designed to create an "imbalance", or majority, on one side so that a primary can be elected in this case.

If you get an even number of nodes on either side, MongoDB will not elect a primary and your set will not accept writes.
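As a small sketch (my own, using the strict-majority rule described above) of what that means: with 4 voting members split 2 and 2, neither side exceeds half of the votes, so no primary can be elected anywhere; an arbiter placed on one side breaks the tie:

```python
# Which side of a network partition can still elect a primary?
def primary_possible(side_votes: int, total_votes: int) -> bool:
    return side_votes > total_votes / 2

# 4 data-bearing members, split 2 / 2 -> no primary on either side
print(primary_possible(2, 4), primary_possible(2, 4))  # False False

# 4 data-bearing members + 1 arbiter, split 3 (incl. arbiter) / 2 -> one side wins
print(primary_possible(3, 5), primary_possible(2, 5))  # True False
```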

Edit

By either side I mean, for example, 2 on one side and 2 on the other. My English wasn't easy to understand there.

So really what I mean is both sides.

Edit

Wikipedia presents quite a good case for explaining CAP: http://en.wikipedia.org/wiki/CAP_theorem

Bruno Bronosky
Sammaye
  • A link to CAP would help. – Dan Dascalescu Jun 19 '14 at 05:10
  • @DanDascalescu Done. I wanted to find the link that taught me, but I cannot find it anymore, and Wikipedia was the only one that didn't use CAP to promote or market something, so I have linked them – Sammaye Jun 19 '14 at 06:57
  • So in simple terms, if I have a replica set with three members, and I add an arbiter, will this cause a problem / stalemate? Thx – mils Nov 06 '17 at 22:35
  • It is quite possible. However, MongoDB does have certain measures to veto voting rounds to try to make an even number of members actually select a primary, but if the members are split evenly across a network partition there is no easy way around this – Sammaye Nov 06 '17 at 22:41
4

Arbiters are an optional mechanism to allow voting to succeed when you have an even number of mongods deployed in a replica set. Arbiters are lightweight, meant to be deployed on a server that is NOT a dedicated mongo replica, i.e. the server's primary role is some other task, like a redis server. Since they're light they won't interfere (noticeably) with the system's resources.

From the docs:

An arbiter does not have a copy of the data set and cannot become a primary. Replica sets may have arbiters to add a vote in elections for primary. Arbiters allow replica sets to have an uneven number of members, without the overhead of a member that replicates data.
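For completeness, here is a hedged pymongo sketch (hostnames are placeholders) of adding such an arbiter by editing the replica set configuration; the mongo shell helper rs.addArb() does the equivalent in one call. Run it against the current primary:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://db1.example.net:27017/?replicaSet=rs0")

cfg = client.admin.command("replSetGetConfig")["config"]
cfg["version"] += 1                       # a reconfig must bump the config version
cfg["members"].append({
    "_id": max(m["_id"] for m in cfg["members"]) + 1,
    "host": "arbiter.example.net:27017",  # the lightweight, non-data-bearing member
    "arbiterOnly": True,
})
client.admin.command("replSetReconfig", cfg)
```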

Adil
  • You can have an even number on one side of the partition and a primary could be elected – Sammaye Aug 13 '13 at 14:06
  • I assume you're talking about a network split. Sure, when the total number of voting replica members is odd you'll end up with one primary. As long as you've deployed an odd number of members, whether one of them is an arbiter or not doesn't matter. The OP was asking "why do we need a dedicated arbiter node". I'd say if you're dedicating a separate machine to an arbiter, you might as well make it a full node. – Adil Aug 13 '13 at 15:25
  • I mean your first line: "when you have an even number of mongods deployed in a replica set", which isn't entirely true – Sammaye Aug 13 '13 at 15:27
  • Though, that being said, they are good to deploy on application servers; but yes, a separate machine is probably better in 90% of cases as a data-holding node – Sammaye Aug 13 '13 at 15:34
  • You say 'optional'. However if I am running one primary and one secondary it seems like 'optional' is a loose term. Is it true that without an arbiter in that case a primary won't be elected? – lostintranslation Sep 09 '15 at 10:23
  • @lostintranslation if you have a 2 node RS you most definitely want to add an arbiter. (It's not really "optional" at that point.) An isolated node will not accept writes. – Bruno Bronosky Mar 29 '17 at 14:52