
I have a plain install of Oracle Linux 9.1 with cockpit and cockpit-machines on machine C. On machines A and B I have a ceph cluster configured which defines an rbd block storage pool for VM disks. Having copied a minimal config and the keyrings onto machine C, I can access the ceph cluster: running ceph osd lspools on machine C returns all configured pools as expected.

In the cockpit UI, however, the only options I see for configuring a new storage pool are filesystem and network file system, nothing else.

How can I configure the existing rbd storage pool to be available to new VMs I create in the cockpit UI?


2 Answers


I'm not familiar with cockpit, but I am with ceph. Reading the cockpit docs I would probably choose physical disk as source, where the physical disk is a mapped rbd device. If you already have a pool dedicated to rbd usage, I would create one (or more) rbd images of the required size:

rbd -p <pool> create -s <size> <name>
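
For example, to create a 10G image named vm-disk1 in a pool called vms (both names are placeholders, adjust them to your setup) and verify it afterwards:

rbd -p vms create -s 10G vm-disk1
rbd -p vms info vm-disk1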

Then map that rbd device on the hypervisor. For automatic mapping at boot there's an example file in the /etc/ceph directory:

# cat /etc/ceph/rbdmap 
# RbdDevice             Parameters
#poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
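
Sticking with the placeholder names from above, an uncommented entry could look like this, and the image can also be mapped manually for a first test (assuming the client.admin keyring is present on the hypervisor):

vms/vm-disk1            id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

rbd map vms/vm-disk1 --id admin

rbd map prints the device node it assigned, e.g. /dev/rbd0.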

To apply the mappings at boot you need to enable the rbdmap service (shown here before it is enabled):

# systemctl status rbdmap.service
● rbdmap.service - Map RBD devices
   Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

# systemctl enable --now rbdmap.service

When the rbd image is mapped on the hypervisor you should see it in the lsblk output as an rbd device, and under /dev as well:

# lsblk 
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                  11:0    1  458K  0 rom  
rbd0                252:0    0   10M  0 disk

# ls -l /dev/rbd0
brw-rw---- 1 root disk 252, 0  9. Feb 12:18 /dev/rbd0

So from the hypervisor's perspective it's now a local disk which you can use to create storage pools.
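
As a rough command-line equivalent of cockpit's physical disk option, such a pool could also be defined with virsh; this is only a sketch, assuming the mapped device /dev/rbd0 and a placeholder pool name (a disk-type pool expects a partition table on the device, so you may need to initialize one first, e.g. with virsh pool-build):

virsh pool-define-as rbd-disk-pool disk --source-dev /dev/rbd0 --target /dev
virsh pool-start rbd-disk-pool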


While creating rbd storage pools in the Cockpit UI is currently impossible, I found a way of creating the pool for use with libvirt. The pool is properly displayed in the UI and I can also create new volumes there.

  1. Log in to your ceph admin node and create a new client key:

ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=mypool'

  2. Log in to your virtualization host (the machine managed with Cockpit) and configure the pool for libvirt:

virsh secret-define --file secret.xml
virsh secret-set-value --secret UUID --base64 "$(ceph auth get-key client.libvirt)"
virsh pool-define mypool.xml

  3. The pool should now be visible in the Cockpit UI, where it can be activated and new volumes can be created (see the commands after the XML below).

The configuration files should look similar to these. First, secret.xml:

<secret ephemeral='no' private='no'>
  <uuid>UUID</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>

And mypool.xml:

<pool type="rbd">
  <name>mypool</name>
  <source>
    <name>mypool</name>
    <host name='CEPH_MON_IP'/>
    <auth username='libvirt' type='ceph'>
      <secret uuid='UUID'/>
    </auth>
  </source>
</pool>
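
For completeness, the remaining steps might look like this; a sketch, assuming the pool name mypool from above and a placeholder volume name disk1. Replace UUID in both files with a freshly generated value (the same one in each), for example from uuidgen:

uuidgen

Then activate the pool, have it started at boot, and create a test volume in it:

virsh pool-start mypool
virsh pool-autostart mypool
virsh vol-create-as mypool disk1 10G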