
I'm having some trouble configuring pacemaker+corosync between CentOS 5 and CentOS 6. Here is the output of crm_mon:

On node1:

Last updated: Sun Jul 21 19:02:21 2013
Last change: Sun Jul 21 18:14:48 2013 via crmd on svr077-53149.localdomain
Stack: openais
Current DC: svr077-53149.localdomain - partition WITHOUT quorum
Version: 1.1.8-2.el5-394e906
2 Nodes configured, 2 expected votes
1 Resources configured.


Online: [ svr077-53149.localdomain ]
OFFLINE: [ svr423L-2737.localdomain ]

Crond   (lsb:crond):    Started svr077-53149.localdomain

On node2:

Last updated: Sun Jul 21 19:03:40 2013
Last change: Sun Jul 21 18:14:56 2013
Stack: classic openais (with plugin)
Current DC: NONE
1 Nodes configured, 2 expected votes
0 Resources configured.


ONLINE: [ svr423L-2737.localdomain ]

Here is my corosync log

My questions are:

  • Why does each node have its own DC, and why does node1 detect two nodes while node2 shows only one?
  • What problem could prevent the two nodes from joining the same cluster?
  • Is it possible to run pacemaker+corosync between CentOS 5 and CentOS 6?

Here are the software versions on the two nodes (the commands used to confirm them are shown after the lists):

Node1:

 - Corosync version 1.4.3 
 - Pacemaker version 1.1.8-2.el5 
 - CentOS release 5.8 (Final)

And

Node2:
 - Corosync version 1.4.1
 - Pacemaker version 1.1.8-7.el6
 - CentOS release 6.4 (Final)
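
For reference, the versions above can be confirmed on each node with the stock commands (nothing specific to my setup):

rpm -q corosync pacemaker
cat /etc/redhat-release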

UPDATE: When I configured it the first time, everything worked OK. After I shut node 1 down and turned it back on to test the failover case, this problem occurred.
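For clarity, the failover test was nothing more than a clean shutdown and power-on of node 1; afterwards I checked the cluster state on both nodes with the standard tools (stock CentOS commands, nothing custom):

shutdown -h now          # on node 1, then power it back on
service corosync status  # on both nodes once it is back up
crm_mon -1               # one-shot cluster status (the output pasted above)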

cuonglm

1 Answer


If you are using multicast, check IGMP support on your switch, and on your cluster hosts check the corosync membership status:

corosync-cfgtool -s
corosync-cmapctl | grep mem
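
(Note that on corosync 1.4 the cmap tool may not exist; corosync-objctl is the older equivalent of corosync-cmapctl.)

Also make sure both nodes use identical multicast settings in /etc/corosync/corosync.conf. A minimal totem/interface sketch, where the addresses are only placeholders and should be replaced with your own network values:

totem {
        version: 2
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0   # network of the cluster interface (placeholder)
                mcastaddr: 226.94.1.1      # must match on both nodes (placeholder)
                mcastport: 5405            # must match on both nodes
        }
}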

Thanks

c4f4t0r
  • The first time I configured it, everything worked fine. When I shut down one node to test the failover case, the problem occurred. – cuonglm Jul 28 '13 at 06:12