
I have a 2-node Pacemaker setup with two VIPs of resource type ocf::heartbeat:IPaddr2:

VIP1: this VIP is not expected to fail over automatically, so the resource is set to unmanaged.

VIP2: this VIP is expected to fail over automatically, so it is kept managed.

Issue: we had a network outage of about 3 minutes, during which:

VIP1: the IP we were using for VIP1 was released from the host and did not come back automatically even after the network was fixed. The resource was marked as Stopped, so the IP existed on neither host1 nor host2.

VIP2: the IP came back on the node and the resource was started again.

We do not want the VIP1 resource to release its IP, even though the resource is unmanaged.

```
[root@osboxes1 ~]# pcs config
Cluster Name: test-cluster
Corosync Nodes:
 osboxes1 osboxes
Pacemaker Nodes:
 osboxes osboxes1

Resources:
 Resource: VIP2 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.50.54 nic=enp0s3:2 cidr_netmask=19
  Operations: start interval=0s timeout=20s (VIP2-start-interval-0s)
              stop interval=0s timeout=20s (VIP2-stop-interval-0s)
              monitor interval=20s (VIP2-monitor-interval-20s)
 Resource: VIP1 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.50.53 nic=enp0s3:1 cidr_netmask=19
  Meta Attrs: is-managed=false
  Operations: start interval=0s timeout=20s (VIP1-start-interval-0s)
              stop interval=0s timeout=20s (VIP1-stop-interval-0s)
              monitor interval=20s (VIP1-monitor-interval-20s)

Stonith Devices:
Fencing Levels:

Location Constraints:
  Resource: VIP1
    Enabled on: osboxes (score:50) (id:location-VIP1-osboxes-50)
  Resource: VIP2
    Enabled on: osboxes1 (score:50) (id:location-VIP2-osboxes1-50)
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 resource-stickiness: 100
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: test-cluster
 dc-version: 1.1.15-11.el7_3.4-e174ec8
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: false

Quorum:
  Options:
```
1 Answer


If I understand your setup correctly, remove the VIP1 resource from the cluster entirely. There is no point in keeping it in the cluster, because the cluster does not manage it anyway:

```
Resource: VIP1 (class=ocf provider=heartbeat type=IPaddr2)
 Attributes: ip=192.168.50.53 nic=enp0s3:1 cidr_netmask=19
 Meta Attrs: is-managed=false
```
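
A minimal sketch of that approach, using the resource name, IP, and NIC from the config above. How you make the address permanent outside the cluster depends on your distribution; the ifcfg file below is an assumption based on the EL7 system shown in `dc-version`:

```shell
# Delete VIP1 from the cluster so Pacemaker never touches that address again
pcs resource delete VIP1

# Configure 192.168.50.53 as a permanent secondary address on the host that
# should own it, so it survives reboots and network flaps.
# Assumption: classic network scripts on EL7 (adjust if using NetworkManager keyfiles).
cat >> /etc/sysconfig/network-scripts/ifcfg-enp0s3 <<'EOF'
IPADDR1=192.168.50.53
PREFIX1=19
EOF
systemctl restart network
```

With the address owned by the OS instead of the cluster, a cluster-side event (such as the monitor failure you saw during the outage) can no longer cause it to be removed from the interface.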