
I am trying to deploy the all-in-one configuration using kolla-ansible with Ceph enabled. My globals.yml contains:

enable_ceph: "yes"
#enable_ceph_mds: "no"
enable_ceph_rgw: "yes"
#enable_ceph_nfs: "no"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
#enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
cinder_backend_ceph: "{{ enable_ceph }}"
cinder_backup_driver: "ceph"
nova_backend_ceph: "{{ enable_ceph }}"

My setup is a VirtualBox VM running Ubuntu 18.04.4 desktop with 2 CPU cores, 2 GB RAM, and a single 30 GB disk; the partition table type is msdos.

ansible version==2.9.7

kolla-ansible version==9.1.0

In order to install a Ceph OSD using kolla-ansible, I read that the partition should be labeled KOLLA_CEPH_OSD_BOOTSTRAP_BS.

Hence, I created a 20 GB root partition (/dev/sda1), then an extended partition (/dev/sda2) for the remaining 20 GB, followed by two 10 GB logical partitions (/dev/sda5 and /dev/sda6) for the OSDs. But msdos partition tables have no way to assign names to partitions.

So my questions are:

  1. How do I label a partition on an msdos partition table so that kolla-ansible recognizes /dev/sda5 and /dev/sda6 as designated for Ceph OSDs?
  2. Is a storage drive separate from the one containing the operating system mandatory for a Ceph OSD (I know a single disk is not recommended)?
  3. How should I provision the space on my single drive in order to install a Ceph OSD using kolla-ansible?

P.S.: I also tried to install Ceph with kolla-ansible on an OpenStack VM (4 CPU cores, 80 GB disk space on a single drive, as I didn't install Cinder in my OpenStack infra) running the Ubuntu 18.04.4 cloud image, which uses a GPT partition table and supports naming partitions. The partitions were as follows:

/dev/vda1 for root partition

/dev/vda2 for ceph OSD

/dev/vda3 for ceph OSD

But the drawback was that kolla-ansible wiped the entire disk, and the installation failed.

Any help is highly appreciated. Thanks a lot in advance.


1 Answer


I also installed a kolla-ansible single-node all-in-one with Ceph as the storage backend, so I had the same problem.

Yes, the bluestore Ceph installation doesn't work with a single partition. I also tried different ways of labeling, but for me it only worked with a whole disk instead of a partition. So for your virtual setup, create a whole new disk, for example /dev/vdb.

For labeling, I used the following bash script:

#!/bin/bash
DEV="/dev/vdb"
(
echo g # create GPT partition table
echo n # new partition
echo   # partition number (automatic)
echo   # start sector (automatic)
echo +10G # end sector (use 10G size)
echo w # write changes
) | fdisk $DEV
parted $DEV -- name 1 KOLLA_CEPH_OSD_BOOTSTRAP_BS

Make sure that DEV at the top is set correctly for your setup. The script creates a new partition table and a single 10 GB partition on the new disk. The kolla-ansible deploy run registers the label and then wipes the whole disk, so the size value does not matter; the partition only exists temporarily to carry the label.
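Before running kolla-ansible deploy, you can confirm the label actually stuck. A small sketch, assuming the /dev/vdb device from the script above (adjust DEV for your setup):

```shell
#!/bin/bash
# Sanity check: the bootstrap role matches on the GPT partition name, so make
# sure the label is set before deploying. DEV is the assumed device from the
# labeling script; change it for your setup.
DEV="/dev/vdb"
EXPECTED="KOLLA_CEPH_OSD_BOOTSTRAP_BS"

if [ -b "${DEV}1" ]; then
  # PARTLABEL is the GPT partition name that `parted name` sets
  LABEL="$(lsblk -rno PARTLABEL "${DEV}1")"
  if [ "$LABEL" = "$EXPECTED" ]; then
    echo "OK: ${DEV}1 is labeled $EXPECTED"
  else
    echo "WRONG LABEL: ${DEV}1 has '$LABEL', expected $EXPECTED" >&2
  fi
else
  echo "skip: ${DEV}1 does not exist on this host"
fi
```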

A single disk is enough for the Ceph OSD in kolla-ansible; you don't need a second OSD. For this, add the following config file to your kolla-ansible setup (assuming the default kolla installation path) at /etc/kolla/config/ceph.conf, with the content:

[global]
osd pool default size = 1
osd pool default min size = 1

This makes sure the pools only require a single replica, so kolla-ansible is satisfied with one OSD. If your kolla directory with globals.yml is not under /etc/kolla/, adjust the path of the config file accordingly.
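Once the deployment has finished, you can sanity-check that the override took effect. A hedged sketch: ceph_mon is the container name a kolla-ansible Ceph deployment typically creates, and images (Glance's default pool) is just one example pool to inspect:

```shell
#!/bin/bash
# Post-deploy check that the "size = 1" override took effect. ceph_mon is the
# monitor container kolla-ansible typically creates; "images" is just one
# example pool name to inspect.
POOL="images"

if command -v docker >/dev/null 2>&1 \
   && docker ps --format '{{.Names}}' 2>/dev/null | grep -qx ceph_mon; then
  docker exec ceph_mon ceph osd pool get "$POOL" size
  docker exec ceph_mon ceph -s    # cluster health overview
else
  echo "skip: no running ceph_mon container on this host"
fi
```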

The solution for a setup with one single disk with multiple partitions is to switch the Ceph storage type in the kolla-ansible setup from bluestore to the older filestore OSD type. This also requires different partition labels, as described here: https://docs.openstack.org/kolla-ansible/rocky/reference/ceph-guide.html#using-an-external-journal-drive . With filestore you need one partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO and a small journal partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J (the FOO in the name really is required...). To switch your kolla installation to the filestore OSD type, edit the [storage] section of the all-in-one file and add ceph_osd_store_type=filestore next to the host, as follows, to override the default bluestore.

[storage]
localhost       ansible_connection=local ceph_osd_store_type=filestore
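For reference, here is a sketch of how the two filestore partitions could be created and labeled in one go. The device /dev/vdb and the partition sizes are assumptions for your setup; the script only prints the parted commands so you can review them before piping them to a shell:

```shell
#!/bin/bash
# Sketch: build a data + journal partition pair for kolla-ansible's filestore
# OSD bootstrap. DEV and the sizes are assumptions -- adjust for your setup.
DEV="/dev/vdb"
DATA_LABEL="KOLLA_CEPH_OSD_BOOTSTRAP_FOO"     # data partition label
JOURNAL_LABEL="${DATA_LABEL}_J"               # journal partition label

# Print the commands instead of running them, so they can be reviewed first
# (pipe the output to `sudo bash` to apply them). On GPT, the name argument
# of mkpart becomes the partition label.
cat <<EOF
parted $DEV -- mklabel gpt
parted $DEV -- mkpart $DATA_LABEL 1MiB 9GiB
parted $DEV -- mkpart $JOURNAL_LABEL 9GiB 10GiB
EOF
```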

The above method has been tested with ansible==2.9.7 and kolla-ansible==9.1.0 on the OpenStack Train release and prior releases.

  • Your answer holds good if I have a spare storage disk to use as an OSD, but have you tried with a single storage disk? Any ideas there? – Skyprenet May 15 '20 at 10:56
  • Yes, I also tried it with a single disc with multiple partitions, but this doesn't work for `bluestore` storage in ceph in kolla-ansible. The only solution I have for you, if you really really only want to have one disc with multiple partitions, is to switch the storage type of the ceph storage in the kolla-ansible setup from `bluestore` to the older `filestore` OSD type. This also requires different partition labels, as written here: https://docs.openstack.org/kolla-ansible/rocky/reference/ceph-guide.html#using-an-external-journal-drive . – Tobias May 15 '20 at 11:16
  • With `filestore` you need one partition with the label `KOLLA_CEPH_OSD_BOOTSTRAP_FOO` and a small journal partition with label `KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J` (the `FOO` in the name is really required...). I'm not totally sure, but you should be able to switch your kolla installation to filestore by adding `ceph_osd_store_type: "filestore"` to the ceph section in your `globals.yml`, to override the default bluestore. – Tobias May 15 '20 at 11:21
  • Added the content of my last comments to my answer text above. – Tobias May 15 '20 at 11:52
  • @Skyprenet hope you have seen my comments and I hope they solved your problem. – Tobias May 15 '20 at 15:29
  • I tried the `filestore` method, but it didn't work either. Also, I believe even the `filestore` method needs two separate disks. Moreover, I also had a look into the kolla-ansible settings, and in order to use filestore one needs to label the partition as `KOLLA_CEPH_OSD_BOOTSTRAP`. In both methods, the entire disk was wiped irrespective of whether the partition was labeled or not. – Skyprenet May 18 '20 at 08:11
  • @Skyprenet I had used the `filestore` variant myself in kolla-ansible 6.x. It worked with the two labels `KOLLA_CEPH_OSD_BOOTSTRAP_FOO` and `KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J` there and only used the tagged partitions, not the whole disc. BUT since filestore is old and not really used anymore, bluestore being the default, it is possible that `filestore` is not really supported in the current release. :( – Tobias May 18 '20 at 08:21
  • I will give it one more try with the `KOLLA_CEPH_OSD_BOOTSTRAP_FOO` and `KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J` tags and let you know. Currently I am on kolla-ansible v9.1.0. – Skyprenet May 18 '20 at 08:26
  • I have updated your instructions with the working setup I just tested. Thanks a lot for all the help. – Skyprenet May 18 '20 at 11:47