I'm playing with Ceph in a Vagrant environment and trying to build a minimal cluster. I have two nodes: 'master' and 'slave'. Master acts as admin, monitor, and manager; slave hosts the OSD.

I'm following the official ceph-deploy guides and am running into a problem with OSD creation. On the slave node I created a 10 GB loop device and mounted it at /media/vdevice; then on the master node I tried to create the OSD:

ceph-deploy osd create slave1:loop0

It fails with:

...
[slave1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 956, in verify_not_in_use
[slave1][WARNIN]     raise Error('Device is mounted', dev)
[slave1][WARNIN] ceph_disk.main.Error: Error: Device is mounted: /dev/loop0
[slave1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/loop0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

With loop0 unmounted, it fails with:

[slave1][WARNIN] ceph_disk.main.Error: Error: /dev/loop0 device size (0M) is not big enough for data
[slave1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/loop0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

Which makes sense, as the actual storage is not bound to the device. So how can I prepare the storage for the OSD?

    When unmounting `loop0` did you also execute `losetup -d /dev/loop0`? Could you try running `losetup /dev/loop0 /your/10GB/file` again and check if ceph-deploy would fail with the same message? – Dima Chubarov Feb 14 '18 at 10:35
  • 1
    @DmitriChubarov thanks for the hint! In my case I've just mounted the file to mount point but haven't associated it with loop0. So I've just unmount it & executed the 'sudo losetup /dev/loop0 vdrive.img'. After that OSD has been created. – Silk0vsky Feb 14 '18 at 11:11
  • Ok, I'd write it up as an answer then. It seems to be something that other people might encounter. – Dima Chubarov Feb 14 '18 at 12:29
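
To summarize the comment thread, the working sequence was roughly the following (a sketch: vdrive.img is the image name from the comment above, and the mount point and node names come from the question):

# On slave1: detach the image from its mount point, then bind it to loop0
sudo umount /media/vdevice
sudo losetup /dev/loop0 vdrive.img

# On master: retry the OSD creation
ceph-deploy osd create slave1:loop0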

1 Answer


Ceph requires a block device for an OSD. To turn a disk image file into a loopback block device, you can use the losetup utility.

sudo losetup /dev/loop0 /your/10GB/file.img

This command attaches the disk image file to the /dev/loop0 device node, creating a loopback block device that can be used with Ceph.
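
To confirm that the attachment worked, you can list the active loop devices and check that the kernel now reports a block device with a non-zero size (both are standard util-linux commands, not Ceph-specific):

sudo losetup -a
lsblk /dev/loop0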

If you need to detach the image file from the device node, you can execute:

sudo losetup -d /dev/loop0

Note that by default Ceph reserves 100 MB per device, so you have to make sure that your image file is larger than that. You could create a suitable image file with:

dd if=/dev/zero of=/cephfs/vdisk.img bs=1M count=10240
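
Writing out 10 GB of zeros can take a while; if the underlying filesystem supports sparse files, an alternative (a sketch, using the same path as above) is to create the image without writing any data up front. Blocks are then allocated lazily as Ceph writes, so the backing filesystem still needs the real space eventually:

truncate -s 10G /cephfs/vdisk.img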
Dima Chubarov