
Recently I added a second SSD to my Pop!_OS 22.04 system, encrypted it, and added it to the volume group. In the process I decided to resize the root logical volume (the only volume). However, instead of reducing it by 1 GB I resized it to 1 GB. Of course everything came to a screeching halt, although I still had a working login session open for a while. Alas, my wife decided to switch the PC off due to the excessive fan noise it was producing (I was sitting upstairs).
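
For reference, I believe the mistake was just the sign on the size argument (reconstructed from memory, I no longer have the shell history):

lvresize -L -1G data/root   # what I meant: shrink by 1 GiB
lvresize -L 1G data/root    # what I ran: set the size to 1 GiB

Either way, shrinking the LV without first shrinking the btrfs filesystem on top of it is presumably what truncated the filesystem.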

So now I'm trying to recover from this mishap. I have already rolled back part of the problem by resizing the root logical volume to the maximum available space, although I'm not sure this was the size before the problems. Now I am faced with the following steps to try to recover, but I am not sure how to go about it. Some investigation using the usual btrfs tools shows there is hope despite the error messages at the bottom of this post.
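
One idea I had: LVM archives the volume group metadata before every change, so the exact pre-mishap layout should be recorded there. The catch is that the archive normally lives under /etc/lvm/archive on the root filesystem itself, so this only helps if I can still read those files from somewhere. Assuming I can, something like:

vgcfgrestore --list data              # list archived metadata versions with timestamps
vgcfgrestore -f <archive-file> data   # restore the layout from before the bad lvresize

(<archive-file> being whichever data_*.vg file --list shows from just before the accident.)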

So how do I proceed?

Some info:

root@recovery:~# cryptsetup luksOpen /dev/nvme1n1p3 cryptdata
Enter passphrase for /dev/nvme1n1p3: 
root@recovery:~# cryptsetup luksOpen /dev/nvme0n1p2 cryptdata2
Enter passphrase for /dev/nvme0n1p2: 

root@recovery:~# lvscan
  ACTIVE            '/dev/data/root' [1.85 TiB] inherit

root@recovery:~# pvscan
  PV /dev/mapper/cryptdata    VG data            lvm2 [<945.36 GiB / 0    free]
  PV /dev/mapper/cryptdata2   VG data            lvm2 [953.36 GiB / 1.00 GiB free]
  Total: 2 [1.85 TiB] / in use: 2 [1.85 TiB] / in no VG: 0 [0   ]

root@recovery:~# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root data -wi-a----- 1.85t                                                    

root@recovery:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/data/root
  LV Name                root
  VG Name                data
  LV UUID                m3vSXH-4l2W-OELY-eDJx-VhTZ-YBCX-J6bunm
  LV Write Access        read/write
  LV Creation host, time pop-os, 2022-01-14 11:58:17 +0000
  LV Status              available
  # open                 0
  LV Size                1.85 TiB
  Current LE             485817
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
root@recovery:~# mount /dev/mapper/data-root /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/mapper/data-root, missing codepage or helper program, or other error.

OK, I re-resized the logical volume to occupy all available space (not documented here). Trying to mount it fails, as shown above.
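
One thing I have not tried yet, in case someone can confirm it is safe: btrfs has read-only rescue mount options that fall back to an older tree root. On a recent kernel (5.9+) that would be something like:

mount -o ro,rescue=usebackuproot /dev/mapper/data-root /mnt

(older kernels spell it -o ro,usebackuproot).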

A filesystem check comes back with this:

root@recovery:~# btrfs check /dev/data/root
Opening filesystem to check...
checksum verify failed on 5050639073280 wanted 0x6fd83112 found 0xf6c05028
checksum verify failed on 5050639073280 wanted 0x6fd83112 found 0xf6c05028
bad tree block 5050639073280, bytenr mismatch, want=5050639073280, have=1967817037638947863
ERROR: cannot read chunk root
ERROR: cannot open file system
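
Since the chunk root itself fails its checksum, I assume the next thing to check is whether one of the backup superblock copies points at a usable chunk root. If I understand the tooling correctly, that would be:

btrfs inspect-internal dump-super -fa /dev/data/root   # dump all superblock copies
btrfs check -s 1 /dev/data/root                        # retry with backup superblock copy 1
btrfs check -s 2 /dev/data/root                        # ... and copy 2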

I also tried this:

root@recovery:~# btrfs rescue chunk-recover /dev/data/root
Scanning: DONE in dev0                         
no recoverable chunk
Chunk tree recovered successfully

root@recovery:~# btrfs rescue super-recover /dev/data/root
Make sure this is a btrfs disk otherwise the tool will destroy other fs, Are you sure? [y/N]: y
checksum verify failed on 5050639073280 wanted 0x6fd83112 found 0xf6c05028
ERROR: cannot read chunk root
Failed to recover bad superblocks
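
If none of the in-place repairs pan out, my understanding is that the last resort is read-only extraction with btrfs restore, possibly pointed at an older tree root found by btrfs-find-root. A sketch, assuming a separate disk mounted at /mnt/rescue as the destination (the path is just a placeholder):

btrfs-find-root /dev/data/root                          # scan for old tree roots and their bytenrs
btrfs restore -D -v /dev/data/root /mnt/rescue          # dry run: list what would be recovered
btrfs restore -t <bytenr> -v /dev/data/root /mnt/rescue # extract using a tree root found above

Does that sound like the right order of operations, or is there something less risky I should try first?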
