
I am using cephadm. I had a test cluster up and running. I re-installed the OS and, during installation, nuked the drives by deleting the LVM volume groups and partitions. Now `lsblk` shows the devices as present, but `ceph orch device ls` returns nothing. I suspected the problem was residual partition tables or LVM metadata, so I have tried the following:

  • cephadm ceph-volume lvm zap --destroy /dev/sda => error: argument DEVICES: invalid
  • ceph orch device zap host1 /dev/sda --force => Error EINVAL: Device path '/dev/sda' not found on host 'host1'
  • wipefs -fa /dev/sda
  • dd if=/dev/zero of=/dev/sda bs=1M count=1024
  • sgdisk --zap-all /dev/sda

None of these worked. Any suggestions? Any help much appreciated.
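
For completeness, these are the commands I am using to check what the orchestrator actually sees (assuming I am reading the tooling right: `--refresh` should force a re-scan of the hosts, and ceph-volume's inventory should list each disk together with the reasons it was rejected):

ceph orch device ls --refresh    # force the orchestrator to re-scan all hosts
cephadm ceph-volume inventory    # run on host1: lists each disk with its rejected reasons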

Tintin
  • What is the output of `lsblk` on host1? – eblock Feb 13 '23 at 08:53
  • As I said, `lsblk` shows the drives as present. I can even mount them after `mkfs.ext4 /dev/sda`. There is nothing wrong with the drives themselves. – Tintin Feb 14 '23 at 10:28

2 Answers


I solved this in the end with `dd if=/dev/zero of=/dev/sda bs=1M` (no count, so the whole drive gets zeroed); after a few hours the drive became available to Ceph. Presumably some data structure is written further into the drive than the first GiB my original count=1024 run wiped.
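
If a full zero pass is too slow, a quicker variant should be to wipe only the two ends of the disk (a sketch, assuming the leftover structure is something like the backup GPT header, which lives in the last sectors of the drive; I have not confirmed which structure was actually the culprit):

DEV=/dev/sda
SECTORS=$(blockdev --getsz "$DEV")                                    # disk size in 512-byte sectors
dd if=/dev/zero of="$DEV" bs=1M count=100                             # wipe the first 100 MiB
dd if=/dev/zero of="$DEV" bs=512 seek=$((SECTORS - 2048)) count=2048  # wipe the last 1 MiB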

Tintin

I'm experiencing the same issue with some disks that have data written on them. When I run the lsblk command, the disk shows multiple partitions:

sda 8:0 0 7.3T 0 disk
├─sda1 8:1 0 1.9T 0 part
└─sda2 8:2 0 1.9T 0 part

I've tried several wipe commands:

wipefs -a /dev/sda --force
shred -n 1 -v /dev/sda

However, neither of them worked; the disk still could not be zapped. I found a solution to fix this: I recreated an empty GPT partition table on the disk with fdisk:

fdisk /dev/sda
  1. Remove all partitions with d
  2. Create a new empty GPT partition table with g
  3. Save the changes with w

After saving, Ceph immediately recognized the disk.

Then I could zap the disk using the following command: ceph orch device zap <hostname> /dev/sda --force
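
For scripting this across several disks, a non-interactive equivalent of the fdisk steps should be possible with sgdisk (a sketch; as far as I know, sgdisk --clear writes the same fresh, empty GPT that fdisk's g followed by w produces):

sgdisk --zap-all /dev/sda    # remove any existing GPT and MBR structures
sgdisk --clear /dev/sda      # write a new, empty GPT
partprobe /dev/sda           # have the kernel re-read the partition table
ceph orch device zap <hostname> /dev/sda --force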

Thomas Coche