
I have a server running ESXi 5.5 hosting several VMs. The host has two datastores: one on the local disk and one on an external SAN. Suddenly we had a problem, and after restarting the server we could no longer mount the internal datastore (it is not listed in the datastore list), so the VMs on the internal datastore are inaccessible. I connected via SSH and found the following situation:

~ # fdisk -l

***
*** The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil
***

fdisk: device has more than 2^32 sectors, can't use all of them
Found valid GPT with protective MBR; using GPT

Disk /dev/disks/naa.60080e50002dde14000014ba578dc80c: 4294967295 sectors, 4095M
Logical sector size: 512
Disk identifier (GUID): d43013c9-8a3c-4d4f-aecd-006ddbf9cadc
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 6433770797

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      6433769471       6135M   0700
Found valid GPT with protective MBR; using GPT

Disk /dev/disks/naa.5000c50031c836d7: 286749488 sectors,  273M
Logical sector size: 512
Disk identifier (GUID): f8421fe0-f3c1-4b01-9e0f-88ceb67328e2
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 286749454

Number  Start (sector)    End (sector)  Size       Code  Name
   1              64            8191        8128   0700
   2         7086080        15472639       8190K   0700
   3        15472640       286749454        258M   0700
   5            8224          520191        499K   0700
   6          520224         1032191        499K   0700
   7         1032224         1257471        219K   0700
   8         1257504         1843199        571K   0700
   9         1843200         7086079       5120K   0700
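
For reference, the same table can also be read with partedUtil, as the deprecation warning suggests (shown here for the second disk only):

~ # partedUtil getptbl /vmfs/devices/disks/naa.5000c50031c836d7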

Partition 3 is the one with the local datastore. Then:

/vmfs/volumes # ls -ls
total 3072
   256 drwxr-xr-x    1 root     root             8 Jan  1  1970 3351387b-8b14702c-16a3-d6d9681a9b23
  1024 drwxr-xr-t    1 root     root          2660 May  3 13:14 578e4303-23b0c8fb-3aa7-e41f13902454
   256 drwxr-xr-x    1 root     root             8 Jan  1  1970 57a0b717-390823ca-3cf1-e41f139025d6
  1024 drwxr-xr-t    1 root     root          1680 May  3 17:35 57a0b71d-1807b75c-88fd-e41f139025d6
   256 drwxr-xr-x    1 root     root             8 Jan  1  1970 57a0b722-c25d88a1-a2d5-e41f139025d6
     0 lrwxr-xr-x    1 root     root            35 May 10 12:42 ESXiDS2 -> 578e4303-23b0c8fb-3aa7-e41f13902454
   256 drwxr-xr-x    1 root     root             8 Jan  1  1970 c498e870-66f7053a-3fba-17a814b33860
     0 lrwxr-xr-x    1 root     root            35 May 10 12:42 datastore1 -> 57a0b71d-1807b75c-88fd-e41f139025d6

datastore1 is the local datastore; it is accessible as a folder and contains the virtual machine files.
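To double-check which device and partition back this UUID, the volume attributes can be queried with vmkfstools (this should behave the same on 5.5):

/vmfs/volumes # vmkfstools -P /vmfs/volumes/datastore1

However, checking the volume with VOMA fails: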

/vmfs/volumes # voma -m vmfs -f check -d /vmfs/devices/disks/naa.5000c50031c836d7:3
Checking if device is actively used by other hosts
Running VMFS Checker version 1.0 in check mode
Initializing LVM metadata, Basic Checks will be done
Phase 1: Checking VMFS header and resource files
   Detected VMFS file system (labeled:'datastore1') with UUID:57a0b71d-1807b75c-88fd-e41f139025d6, Version 5:60
         ERROR: IO failed: Input/output error
 ON-DISK ERROR: Corruption too severe in resource file [FB]
         ERROR: Failed to check fbb.sf.
   VOMA failed to check device : IO error

Total Errors Found:           1
   Kindly Consult VMware Support for further assistance.

Is there a way to recover fbb.sf and make the datastore consistent and working again? Or at least, is there a way to retrieve the VMs from the corrupted datastore?
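
Since the datastore1 folder is still browsable, I was also considering copying the VMs off to the SAN datastore, for example by cloning each virtual disk with vmkfstools (VM and file names below are placeholders), though I am not sure reads would succeed given the corruption:

/vmfs/volumes # mkdir /vmfs/volumes/ESXiDS2/recovered
/vmfs/volumes # vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk /vmfs/volumes/ESXiDS2/recovered/MyVM.vmdk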

  • restore from backups? – mdpc May 10 '17 at 21:28
  • I had no backup for these VMs. I was wondering if they can be retrieved from the datastore1 folder, which is accessible and probably not corrupted. – kuma May 11 '17 at 15:46
