
I guess I'm missing something, although I have read the GPFS documentation. Let's say we have a four-node GPFS setup, with three of the nodes designated as quorum nodes. In GPFS > 3.2 we can perform rolling upgrades, but how does that work when there seems to be no way of keeping quorum? (At some point one of those three quorum nodes will need to be upgraded.)

On the internet I saw someone saying that quorum is still maintained if one of the quorum nodes goes down, but I haven't seen that stated in the Redbook documentation.


2 Answers


So, it seems I found an answer.

In order for quorum to be available, more than half of the defined quorum nodes need to be online. That means a quorum of 6 nodes needs 4 of them online, and a quorum of 3 nodes needs 2 of them online. So in the four-node setup above, with three quorum nodes, taking one quorum node down for an upgrade still leaves 2 of the 3 online, and quorum is preserved.

The details are in the GPFS documentation (see the Node Quorum section).
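As a rough illustration of the majority arithmetic, here is a small shell sketch; the node counts are just example values, not taken from any particular cluster:

    # Minimum number of quorum nodes that must stay online for node quorum,
    # i.e. a strict majority of the defined quorum nodes.
    for quorum_nodes in 3 4 5 6 7; do
        required=$(( quorum_nodes / 2 + 1 ))
        echo "quorum nodes defined: $quorum_nodes -> must stay online: $required"
    done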

    I don't know about GPFS specifically, but most systems that have a concept of quorum also have a witness or quorum-only type of node. Such a node does not handle any workload, but it does contribute a vote to the quorum. This could allow you to shut down more than half of your data nodes. – longneck Sep 26 '13 at 01:54
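In GPFS the rough equivalent of that idea is simply designating an extra, lightly loaded node as a quorum node; quorum nodes only have to vote, they don't have to serve data. A hedged sketch, assuming a GPFS level that has mmchnode and a hypothetical node name adminnode1 (use one of your own cluster node names):

    # Show the current node designations (quorum, manager, etc.)
    mmlscluster

    # Designate an additional node as a quorum node so it contributes a vote.
    # "adminnode1" is a placeholder, not a real node name.
    mmchnode --quorum -N adminnode1

    # Check that the designation changed.
    mmlscluster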

In GPFS, a majority of the quorum nodes is required for the cluster to stay up. You can indeed perform rolling upgrades on current versions of GPFS; I have done this with 3.4 and 3.5, but I'm not sure about 3.3 or earlier.

My advice for a 4-node cluster, or any cluster for that matter, would be to take one node at a time and upgrade it. If you run mmgetstate -aL you'll see the current quorum state, how many quorum nodes are active, and how many are required to keep the cluster up.
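To make that concrete, here is a hedged sketch of the per-node loop, assuming hypothetical node names gpfs1 through gpfs4 and whatever package method your distribution uses for the GPFS updates; check mmgetstate -aL and your own upgrade notes before each step:

    # Rolling upgrade, one node at a time (sketch, not a full procedure).
    for node in gpfs1 gpfs2 gpfs3 gpfs4; do
        # Stop GPFS on just this node; the remaining quorum nodes keep quorum.
        mmshutdown -N "$node"

        # Install the new GPFS packages on $node here
        # (e.g. rpm/dpkg upgrade plus rebuilding the portability layer).

        # Bring the node back and confirm it reaches the active state.
        mmstartup -N "$node"
        mmgetstate -N "$node"

        # Confirm the overall quorum state before moving to the next node.
        mmgetstate -aL
    done

Once every node is running the new level, the upgrade documentation also describes finalisation steps (for example mmchconfig release=LATEST); check the migration section for your target version.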

A note for anyone else looking at this: if you have a 2-node cluster then you should look into tiebreaker disks. This means that if one node is down and the remaining quorum node can still see the tiebreaker disks, it will keep the cluster alive. This is vital for redundancy in a 2-node cluster, but in larger clusters it's better to have 3 or 5 quorum nodes.
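For reference, tiebreaker disks are configured through the tiebreakerDisks option. A minimal sketch, assuming hypothetical NSD names nsd1, nsd2 and nsd3 that already exist in the cluster; depending on your GPFS level you may need GPFS down across the cluster while changing this setting:

    # List the NSDs in the cluster to choose tiebreaker candidates from.
    mmlsnsd

    # Point node quorum at tiebreaker disks (up to three NSDs, separated by
    # semicolons). "nsd1;nsd2;nsd3" are placeholders for your own NSD names.
    mmchconfig tiebreakerDisks="nsd1;nsd2;nsd3"

    # Confirm the setting took effect.
    mmlsconfig | grep -i tiebreaker

Setting tiebreakerDisks=no turns the behaviour off again if you later grow the cluster and move to 3 or 5 quorum nodes instead.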