I'm playing with Ceph in a Vagrant environment, trying to set up a minimal cluster. I have two nodes: 'master' and 'slave'. Master acts as admin, monitor, and manager; slave holds the OSD.
I'm following the official ceph-deploy guides and am stuck on OSD creation. On the slave node I created a 10 GB loop device and mounted it at /media/vdevice.
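The loop device was prepared roughly like this (a sketch; the backing-file path /media/vdevice.img is just an example, the actual path may differ):

truncate -s 10G /media/vdevice.img       # create a sparse 10 GB backing file (path is an example)
losetup /dev/loop0 /media/vdevice.img    # attach it to /dev/loop0
mkfs.xfs /dev/loop0                      # create a filesystem so it can be mounted
mkdir -p /media/vdevice
mount /dev/loop0 /media/vdevice          # mount it at /media/vdevice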
Then, on the master node, I tried to create the OSD:
ceph-deploy osd create slave1:loop0
It fails with:
...
[slave1][WARNIN] File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 956, in verify_not_in_use
[slave1][WARNIN] raise Error('Device is mounted', dev)
[slave1][WARNIN] ceph_disk.main.Error: Error: Device is mounted: /dev/loop0
[slave1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/loop0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
With loop0 unmounted, it fails instead with:
[slave1][WARNIN] ceph_disk.main.Error: Error: /dev/loop0 device size (0M) is not big enough for data
[slave1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/loop0
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
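Presumably /dev/loop0 no longer has a backing file attached at that point. A quick check on the slave node along these lines (a sketch, using standard util-linux tools) should show whether anything is still bound to the device:

losetup -a                        # list configured loop devices and their backing files
blockdev --getsize64 /dev/loop0   # reports 0 bytes when no backing store is attached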
The 0M size makes sense, as the actual storage is not bound to the device at that point. So how can we properly prepare the storage for the OSD?