I pretty much followed the same general idea as here: Resizing a partition

- Resize VMware disk: through vSphere, resize the disk from 100GB to 300GB
  (reboot VM)
- Delete partition
  (fdisk /dev/sdb: d, 1)
- Recreate partition
  (while still in the same fdisk session on /dev/sdb: n, p, 1, <defaults>)
  (reboot VM)
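
For clarity, the fdisk session was essentially the following (a sketch from memory; the exact prompts vary between fdisk versions):

fdisk /dev/sdb
Command (m for help): d
Selected partition 1
Command (m for help): n
Partition type: p
Partition number (1-4): 1
First sector: <Enter for default>
Last sector: <Enter for default>
Command (m for help): w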

Unfortunately, the XFS filesystem will now no longer mount.

I'm basically getting a "bad superblock" error. What I'm trying to figure out is where the superblock actually resides. Is it inside the partition or at the very beginning of the disk?
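
If I understand the on-disk layout correctly, the XFS primary superblock is the first sector of the filesystem itself, i.e. it starts wherever the partition starts, and its magic is the ASCII string "XFSB". A quick probe (a sketch; device name from my setup) would be:

dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head -1

If the partition still began in the right place, the first four bytes would read "XFSB"; in my case they evidently do not.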

When I try an xfs_repair -n, it scans for quite some time and eventually gives up.

xfs_repair -n /dev/sdb1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
..........
found candidate secondary superblock
unable to verify superblock, continuing...
..........
Sorry, could not find valid secondary superblock
Exiting now.

When I deleted and recreated the partition, should I have noted the starting sector? What I notice now is that partition 1 defaults to a start of 2048, whereas on similar systems I've seen a start of 63.

Admittedly, I didn't think that recording the start of the old partition before deleting it was important. It never came up in any of my recent searching, and it is perhaps the key here.
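
For anyone else in this spot: before deleting the partition, dumping the table would have taken seconds and been trivially restorable (a sketch using sfdisk; the backup file name is my own choice):

fdisk -l /dev/sdb                      # note the Start column for /dev/sdb1
sfdisk -d /dev/sdb > sdb-table.backup  # dump the partition table to a file
# restore later, if needed:
# sfdisk /dev/sdb < sdb-table.backup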

Perhaps my original superblock lies somewhere in the 63-2048 sector range? I've copied the VM so that I can try a few things without toying too much with the original. Unfortunately, that copy was taken after I broke the original.
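
To test that hypothesis on the copy, I can brute-force scan that sector range for the XFS magic (a sketch; device and range are from my setup):

for s in $(seq 63 2048); do
    if dd if=/dev/sdb bs=512 skip=$s count=1 2>/dev/null | head -c 4 | grep -q XFSB; then
        echo "possible XFS superblock at sector $s"
    fi
done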

UFS Explorer (https://www.ufsexplorer.com/ufs-explorer-standard-recovery.php), which came up during my searches, sees the XFS filesystem and seemingly all of its contents (via a scan of the VMDK).

Marco Shaw
  • Show us what *you* did (as detailed as possible), and show exact error messages. Your question should be able to stand on its own. – Sven Feb 20 '19 at 19:03
  • If you are only working on a copy of the VM anyway, just delete it and try again. Note the starting block of the partition this time (I've never seen a guide that did NOT mention this step). Or look up the partition table on the original VM and recreate the partition again. – Gerald Schneider Feb 22 '19 at 14:27
  • @GeraldSchneider Sorry, I meant I took a copy of the VM *after* breaking it. I tried to give it a start of 63 on the copy, but perhaps because of the new size geometry(?), it will not allow anything lower than 2048. I've updated the post as best I can. I don't know how to have it taken "off hold", if I've added enough information. – Marco Shaw Feb 22 '19 at 14:38
  • Well, you'll need to find the first block of that partition. Personally, I'd just rebuild the VM from the backup at this point. – Gerald Schneider Feb 22 '19 at 14:51

2 Answers

You really should have recorded the partition sector start number. At this point, do not touch the filesystem itself without first reconstructing the right partition layout.

You can manually check for the MBR magic number (0xAA55) or, even better, use testdisk to recover your partition table.
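
Something along these lines, for example (a sketch; note that 0xAA55 is stored little-endian, so the last two bytes of sector 0 should read 55 aa):

# check the boot signature at the end of the MBR
dd if=/dev/sdb bs=512 count=1 2>/dev/null | hexdump -C | tail -2

# let testdisk search for the lost partition and rewrite the table
testdisk /dev/sdb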

shodanshok
  • I'm going to do some additional testing, but testdisk was able to recover (and write) a functional partition table. – Marco Shaw Feb 22 '19 at 18:20

The actual root cause of the failure: the disk had a DOS-style partition table (first partition starting at sector 63), and recreating the partition with defaults wiped that layout out, since newer fdisk defaults to a start of 2048. Certain newer versions of fdisk require you to run them with the option -c=dos to get the old DOS-compatible behaviour, and it appears that option will be removed altogether at some point in the future.
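
On the copy, forcing the old DOS-compatible layout looked roughly like this (a sketch; -c=dos and -u=sectors are the relevant switches on fdisk versions that still accept them):

fdisk -c=dos -u=sectors /dev/sdb
# then: n, p, 1, first sector 63, last sector <default>, w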

Once I Googled "fdisk start sector 63 2048" (which basically auto-completed!), it all became much clearer.

https://superuser.com/questions/352572/why-does-the-partition-start-on-sector-2048-instead-of-63

Marco Shaw