
I am testing a 2-node DRBD cluster in VirtualBox. I have attached a virtual disk (/dev/sdb) and mounted it on the /mnt/drbd0 directory. While creating a resource with the drbdadm create-md command I am getting a "resource (/dev/sdb1) is busy" error. There is no active process associated with /mnt/drbd0.

  • Depending on what exactly you're doing, the problem might simply be that the device is mounted and that you need to prepare it as a DRBD device before doing so... (But I haven't touched DRBD in ages...) – HBruijn Oct 01 '19 at 11:20
    To add to HBruijn's comment: it does sound like you have /dev/sdb mounted at /mnt/drbd0. You need to first create the DRBD resource. Then DRBD and udev will automatically make a /dev/drbd0 device. You'll then want to mount the /dev/drbd0 device at /mnt/drbd0. – Dok Oct 01 '19 at 15:58
  • @HBruijn I followed https://www.theurbanpenguin.com/drbd-pacemaker-ha-cluster-ubuntu-16-04/. I am new to DRBD; could you please tell me if the workflow is right? I am getting the error while creating the resource. – Umesh Upadhyay Oct 03 '19 at 14:53

1 Answer


You shouldn't mount the attached block device, /dev/sdb, directly. Instead, leave it attached as you already have, and, while it is unmounted: create the DRBD metadata on both nodes, bring up the DRBD device on both nodes, pick a node and promote the DRBD device to Primary there, and finally create a filesystem on the resulting /dev/drbdX device and mount that.

To get there from where you currently are, first unmount the block device on both nodes:

# umount /dev/sdb

Then, follow the steps I outlined in the summary above:

On both nodes (substitute <res> with the name of your DRBD resource):

# drbdadm create-md <res>
# drbdadm up <res>
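
Those commands assume the resource is already defined on both nodes. In case it helps, here is a minimal sketch of what such a resource file might look like (for example /etc/drbd.d/r0.res, identical on both nodes); the resource name, hostnames, IPs, and port are placeholders for your own values, and the names in the on sections must match each node's uname -n:

resource r0 {
  device    /dev/drbd0;
  disk      /dev/sdb;
  meta-disk internal;

  on node-a {
    address 192.168.56.101:7789;
  }
  on node-b {
    address 192.168.56.102:7789;
  }
}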

If create-md refuses to run because it would overwrite the existing filesystem, you will need to either wipe that filesystem or shrink it to make room for DRBD's metadata (shrinking is not supported by XFS). Since you're just testing and have no important data, I would simply wipe it:

# wipefs -a /dev/sdb

You should then see something like this in the output of drbdadm status:

r0 role:Secondary
  disk:Inconsistent
  node-b role:Secondary
    peer-disk:Inconsistent

If the nodes are stuck in a Connecting state, check the IPs in your configuration file and your nodes' firewall rules.
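
A few quick checks from each node can help narrow that down; the peer address and port below are just the placeholders from the config sketch above, so substitute your own:

# ping -c 3 192.168.56.102     # basic reachability to the peer
# nc -zv 192.168.56.102 7789   # can we reach the peer's DRBD port?
# ufw allow 7789/tcp           # e.g. open the DRBD port if ufw is blocking it (Ubuntu)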

Once you see they are Inconsistent/Inconsistent, you can choose one of your nodes to become Primary and start the initial sync. Then, on that same node, (re)create your filesystem and mount the DRBD device:

# drbdadm primary <res> --force
# mkfs.ext4 /dev/drbd0 # or some other filesystem
# mount /dev/drbd0 /mnt/drbd0
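
If you want to watch the initial sync before going any further, drbdadm status on the node you just promoted should show something roughly like this while it is syncing (the peer name and percentage here are only illustrative):

r0 role:Primary
  disk:UpToDate
  node-b role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:42.07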

Then anything you write into /mnt/drbd0 will be replicated to the peer. To test that everything is working, you could unmount the device on the Primary node, demote it to Secondary (# drbdadm secondary <res>), then promote the device to Primary on the peer and mount it there; you should see that your filesystem and its contents have been replicated.
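
Concretely, that test might look like the following, assuming the initial sync has completed (both disks UpToDate), node-a is currently Primary, and node-b is the peer (the node names are just examples):

On node-a (the current Primary):

# umount /mnt/drbd0
# drbdadm secondary <res>

On node-b (the peer):

# drbdadm primary <res>
# mount /dev/drbd0 /mnt/drbd0
# ls /mnt/drbd0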

Note that you shouldn't use the --force flag under normal circumstances; only when you're creating a new DRBD device (as here) or recreating the metadata for an existing one.

Matt Kereczman