I added an extra drive for Ceph, but after zapping the disk, OSD creation failed because the device was still held by a device-mapper mapping. After rebooting, the OSD was created properly, but `ceph osd tree` now shows:

ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 4.53099 root default
-2 3.62700     host mymachine2
 0 0.90399         osd.0          up  1.00000          1.00000
 3 2.72299         osd.3          up  1.00000          1.00000
-3 0.90399     host mymachine4
 1 0.90399         osd.1          up  1.00000          1.00000
 2       0 osd.2                down        0          1.00000

I've read the docs but didn't find a way to remove that "rogue" osd.2.

`ceph health` is not displaying any warnings or errors for now. Any suggestions?
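
For reference, a stale device-mapper mapping like the one that blocked OSD creation can often be inspected and cleared without a reboot. A minimal sketch, assuming the new drive is /dev/sdb (the device and mapping names here are only illustrative, and ceph-disk applies to the pre-ceph-volume releases current at the time):

dmsetup ls                 # list active device-mapper mappings
lsblk /dev/sdb             # check which mapping still holds the disk
dmsetup remove <mapping>   # clear the stale mapping reported by dmsetup ls
ceph-disk zap /dev/sdb     # zap the disk again before retrying OSD creation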

FerGC

1 Answer


Try this:

ceph osd crush reweight osd.2 0.0

Then wait for the rebalance to finish.
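
One way to tell when the rebalance is done is to watch the cluster status until all placement groups report active+clean, for example:

ceph -w    # stream cluster events while data moves
ceph -s    # or poll the summary and check the PG states

Once everything is back to active+clean, remove the OSD: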

ceph osd out 2
service ceph stop osd.2
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm 2
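
Afterwards you can confirm the OSD is gone from both the crush map and the osdmap, e.g.:

ceph osd tree                # osd.2 should no longer be listed
ceph osd dump | grep osd.2   # prints nothing once the osdmap entry is gone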

Does this resolve the problem?

Skadia
  • Yes! More or less... `reweight` gave the error `Error ENOENT: device 'osd.2' does not appear in the crush map`. Then `service ceph stop osd.2` gave no output, as expected I guess. `crush remove` said osd.2 does not appear in the crush map and `auth del` said the entity does not exist... but `ceph osd rm 2` actually removed it. Thanks! – FerGC Jan 31 '17 at 12:50
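
For anyone who lands here with the same output: those errors are consistent with osd.2 existing only in the osdmap (in the tree above it has weight 0 and sits outside any host), so there was never a crush entry or auth key to delete, and `ceph osd rm 2` is the only step with anything to remove. A quick way to check which pieces actually exist for a given OSD id (2 here is just the id from this question):

ceph osd ls                   # ids still present in the osdmap
ceph auth list | grep osd.2   # shows the entry name only if an auth key exists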