
I had two 1TB HDDs and a 500GB RAID1 LVM LV across them. Then I added another 2TB HDD and converted the LV to RAID5 using:

pvcreate /dev/sdd
vgextend ubuntu-vg /dev/sdd
lvconvert --type raid5 --stripes 2 ubuntu-vg/ubuntu-lv
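
Resync progress during the conversion can be watched with the standard sync_percent reporting field of lvs (a generic invocation, not output from my system):

lvs -a -o name,segtype,sync_percent ubuntu-vg/ubuntu-lv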

But now, when I list all LVs, there are two SubLVs, with the same name, on the same PV:

# lvs -a -o name,segtype,devices,size,path
  LV                   Type   Devices                                                           LSize    Path
  btest                linear /dev/sdb(128002)                                                    10.00g /dev/ubuntu-vg/btest
  ctest                linear /dev/sdc(128002)                                                    10.00g /dev/ubuntu-vg/ctest
  dtest                linear /dev/sdd(128002)                                                    10.00g /dev/ubuntu-vg/dtest
  ubuntu-lv            raid5  ubuntu-lv_rimage_0(0),ubuntu-lv_rimage_1(0),ubuntu-lv_rimage_2(0) 1000.00g /dev/ubuntu-vg/ubuntu-lv
  [ubuntu-lv_rimage_0] linear /dev/sdb(128001)                                                   500.00g
  [ubuntu-lv_rimage_0] linear /dev/sdb(1)                                                        500.00g
  [ubuntu-lv_rimage_1] linear /dev/sdc(128001)                                                   500.00g
  [ubuntu-lv_rimage_1] linear /dev/sdc(1)                                                        500.00g
  [ubuntu-lv_rimage_2] linear /dev/sdd(128001)                                                   500.00g
  [ubuntu-lv_rimage_2] linear /dev/sdd(1)                                                        500.00g
  [ubuntu-lv_rmeta_0]  linear /dev/sdb(0)                                                          4.00m
  [ubuntu-lv_rmeta_1]  linear /dev/sdc(0)                                                          4.00m
  [ubuntu-lv_rmeta_2]  linear /dev/sdd(0)                                                          4.00m

([b-d]test were created after the conversion to test read/write speed on the individual HDDs.)
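
Each test LV was pinned to a single PV, roughly like this (a hypothetical reconstruction of the commands, not the exact invocations):

lvcreate -L 10g -n btest ubuntu-vg /dev/sdb
lvcreate -L 10g -n ctest ubuntu-vg /dev/sdc
lvcreate -L 10g -n dtest ubuntu-vg /dev/sdd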

For example, /dev/sdb(1) and /dev/sdb(128001) are both named ubuntu-lv_rimage_0 and each is listed at 500G, yet pvs shows only 500G allocated on /dev/sdb:

# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/sdb   ubuntu-vg lvm2 a--  931.51g <421.51g
  /dev/sdc   ubuntu-vg lvm2 a--  931.51g <421.51g
  /dev/sdd   ubuntu-vg lvm2 a--   <1.82t    1.32t
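
The per-extent allocation can be double-checked on the PV itself; both ubuntu-lv_rimage_0 ranges on /dev/sdb together add up to the expected 500G:

pvdisplay --maps /dev/sdb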

Is this normal? If not, how can I correct it?


EDIT: I think they are two segments of one LV; it's only the output format of lvs that is a little confusing. The second segment was created during the RAID1-to-RAID5 conversion. How can I delete it?

# lvs -a -o name,seg_pe_ranges ubuntu-vg/ubuntu-lv_rimage_0
  LV                   PE Ranges
  [ubuntu-lv_rimage_0] /dev/sdb:128001-128001
  [ubuntu-lv_rimage_0] /dev/sdb:1-127999
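
If the goal is just to get rid of the extra segment entry rather than shrink the LV, one sketch that might work is to merge the two ranges by moving the lone trailing extent so it sits immediately after the main range, assuming PE 128000 on each PV is actually free (check with pvdisplay --maps first) and that your LVM version permits pvmove on RAID sub-LV extents:

# relocate the stray extent of each rimage so the two segments become contiguous;
# adjacent linear segments should then coalesce into one, leaving the data untouched
pvmove --alloc anywhere /dev/sdb:128001 /dev/sdb:128000
pvmove --alloc anywhere /dev/sdc:128001 /dev/sdc:128000
pvmove --alloc anywhere /dev/sdd:128001 /dev/sdd:128000
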
  • Did the conversion complete successfully? The conversion is done internally by creating and syncing a mirror, and it should remove the old (redundant) volume by itself after it is complete. – Nikita Kipriyanov Jun 15 '22 at 05:18
  • @NikitaKipriyanov Yes, `lvs` shows 100% sync progress. I remember the `lvmraid` man page saying that converting RAID1 to RAID5 extends the LV by 1 extent for in-place copying, so I think the "duplicate" I found is that 1 extent, shown as a separate segment. – yume_chan Jun 15 '22 at 08:56
  • Please don't use R5; it's been dangerous for well over a decade now, and nobody really uses it, not on >1TB HDDs anyway - R1/10/6/60/Z are the only games in town. – Chopper3 Jun 15 '22 at 09:00
  • I think the 100% mirror sync state in this case does not indicate that the conversion command's execution path fully completed. It installed the mirror and waited for it to finish syncing. If you, say, reboot the machine during this process, the mirror will later resume synchronization and eventually complete, but the controlling command will have been interrupted and the conversion from R1 to R5 will not be complete. I think this is probably how you ended up in this state. I just wanted to know if something like this actually happened, and if it did, what it was exactly. – Nikita Kipriyanov Jun 15 '22 at 09:49
  • @NikitaKipriyanov I'm not sure what else could have caused the issue. The conversion process took more than 6 hours, but there weren't any interruptions. – yume_chan Jun 15 '22 at 16:37
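
For reference, the current sync state and health of the converted LV can be inspected with standard lvs reporting fields:

lvs -a -o name,segtype,sync_percent,raid_sync_action,lv_health_status ubuntu-vg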

0 Answers