I just initialized a Ceph cluster across two different servers:
```
    cluster 241b5d19-15f5-48be-b98c-285239d70038
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stuck degraded
            64 pgs stuck unclean
            64 pgs stuck undersized
            64 pgs undersized
     monmap e3: 2 mons at {serv1=10.231.69.9:6789/0,serv2=10.231.69.34:6789/0}
            election epoch 6, quorum 0,1 serv1,serv2
        mgr no daemons active
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
            68292 kB used, 1861 GB / 1861 GB avail
                  64 active+undersized+degraded
```
The cluster runs only `mon` and `osd` daemons (I did not set up `mds`, `rgw`, or CephFS).
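(As an aside, I believe the `HEALTH_WARN` is simply the default replication size of 3 being impossible to satisfy with only 2 OSDs, which is why every PG sits in `active+undersized+degraded`. A minimal sketch of how I would check and adjust this, assuming the default `rbd` pool:)

```bash
# Show the pool's replication size (the default is 3)
ceph osd pool get rbd size

# With only 2 OSDs a size of 3 can never be satisfied; dropping it
# to 2 should let the PGs go active+clean
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
```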
I would like to use `rbd` to create persistent shared storage for container volumes, but I'm really confused about how to plug my OSDs into Docker.
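For context, my understanding of the fully manual approach (no plugin at all) is roughly the sketch below; the image name `appdata` and the mount point are made up:

```bash
# Create a 10 GiB image with only the "layering" feature enabled,
# so the kernel rbd client can map it
rbd create appdata --size 10240 --image-feature layering

# Map it on the Docker host; prints a device such as /dev/rbd0
rbd map appdata

# Format and mount the device on the host
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/appdata
mount /dev/rbd0 /mnt/appdata

# Bind-mount the directory into a container like any host path
docker run --rm -it -v /mnt/appdata:/data alpine sh
```

But doing all of this by hand on every host (and handling remapping on failover) is exactly what I would hope a volume plugin takes care of.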
I saw that some `rbd` Docker volume plugins exist:
- https://github.com/yp-engineering/rbd-docker-plugin
- https://github.com/AcalephStorage/docker-volume-ceph-rbd
- https://github.com/contiv/volplugin
But none of them seems to be compatible with the latest Docker versions, or at least with Docker >= 1.13.
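(If one of them did work, I would expect the usual Docker volume-driver flow, something like the following; the driver name `rbd` and the `size` option are assumptions that depend on how the plugin registers itself:)

```bash
# Create a named volume backed by the plugin's driver
docker volume create -d rbd -o size=10 mydata

# Mount it into a container by name
docker run --rm -it -v mydata:/data alpine sh
```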
So I'm wondering how I can achieve what I want. A few solutions come to mind, but I'm really not sure which is best (or even whether they are possible):
1. Use CephFS plus standard Docker bind-mounted volumes (see the sketch after this list)
2. Use rexray (flocker is no longer maintained)
3. Install the Ceph Object Gateway (RGW, which exposes an S3-compatible API) and use one of the existing Docker S3 plugins
4. Something else?
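For option 1, this is roughly what I picture (a sketch only: it assumes an MDS has been deployed, which I haven't done yet, and that the admin secret is in the usual place):

```bash
# Mount CephFS on each Docker host via the kernel client
mkdir -p /mnt/cephfs
mount -t ceph 10.231.69.9:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

# Then bind-mount subdirectories into containers as ordinary volumes
docker run --rm -it -v /mnt/cephfs/myapp:/data alpine sh
```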
But option 1 seems inelegant and will be harder to manage in a larger environment (more than two servers), whereas option 2 seems like a great starting point. Does anyone have feedback on it?