14

I had a ZFS pool -- a mirror containing 2 vdevs -- running on a FreeBSD server. I now have only one of the disks from the mirror, and I am trying to recover files from it.

The ZFS data sits in a GPT partition on the disk.

When I try to import the pool, there's no sign that it exists at all. I have tried a number of approaches, but nothing happens.

I have run zdb -lu on the partition, and it seems to find the labels just fine.
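
For reference, the command was of this form (assuming the label check was run against the freebsd-zfs partition da0p3 shown in the gpart output below):

# zdb -lu /dev/da0p3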

# zpool import
# zpool import -D
# zpool status
no pools available
# zpool import -f ztmp
cannot import 'ztmp': no such pool available
# zpool import 16827460747202824739
cannot import '16827460747202824739': no such pool available

Partition information:

# gpart list da0
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 65536 (64K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   rawuuid: d7a10230-8b0e-11e1-b750-f46d04227f12
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 65536
   offset: 17408
   type: freebsd-boot
   index: 1
   end: 161
   start: 34
2. Name: da0p2
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 82944
   Mode: r0w0e0
   rawuuid: d7aa40b7-8b0e-11e1-b750-f46d04227f12
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 17179869184
   offset: 82944
   type: freebsd-swap
   index: 2
   end: 33554593
   start: 162
3. Name: da0p3
   Mediasize: 1905891737600 (1.7T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 82944
   Mode: r0w0e0
   rawuuid: d7b6a47e-8b0e-11e1-b750-f46d04227f12
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1905891737600
   offset: 17179952128
   type: freebsd-zfs
   index: 3
   end: 3755999393
   start: 33554594
Consumers:
1. Name: da0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0

ZFS label:

--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'ztmp'
    state: 0
    txg: 0
    pool_guid: 16827460747202824739
    hostid: 740296715
    hostname: '#############'
    top_guid: 15350190479074972289
    guid: 3060075816835778669
    vdev_children: 1
    vdev_tree:
        type: 'mirror'
        id: 0
        guid: 15350190479074972289
        whole_disk: 0
        metaslab_array: 30
        metaslab_shift: 34
        ashift: 9
        asize: 1905887019008
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 3060075816835778669
            path: '/dev/gptid/d7b6a47e-8b0e-11e1-b750-f46d04227f12'
            phys_path: '/dev/gptid/d7b6a47e-8b0e-11e1-b750-f46d04227f12'
            whole_disk: 1
            DTL: 5511
            resilvering: 1
        children[1]:
            type: 'disk'
            id: 1
            guid: 3324029433529063540
            path: '/dev/gptid/396a2b11-cb16-11e1-83f4-f46d04227f12'
            phys_path: '/dev/gptid/396a2b11-cb16-11e1-83f4-f46d04227f12'
            whole_disk: 1
            DTL: 3543
            create_txg: 4
            resilvering: 1
    features_for_read:
    create_txg: 0
Uberblock[0]
    magic = 0000000000bab10c
    version = 5000
    txg = 0
    guid_sum = 1668268329223536005
    timestamp = 1361299185 UTC = Tue Feb 19 10:39:45 2013

(Other labels are exact copies)

There is a discussion of a similar-sounding problem in this old thread. I tried running Jeff Bonwick's labelfix tool (with updates from this post), but it did not seem to solve the problem.

Any ideas?

squidpickles
  • What happened before this? – ewwhite Mar 28 '14 at 20:00
  • The drive was detached from the mirror, rather than being split. It appears that was the cause of the problem. The rest of the mirror does not exist, unfortunately. – squidpickles Mar 28 '14 at 20:03
  • I don't know that this is the proper forum for this, because the 'answer' to the question involves a lot of trial & error. For now, try 'zpool import -d '. -D lists destroyed pools, -d takes an argument of the location of a disk to look at, and can be specified multiple times on the command line (but in your case, only once will be needed as you have but the one disk). See what that does. – Nex7 Mar 28 '14 at 20:23
  • You may be right about this not being the right forum. And yes, I've tried with the `-d` and `-D` options, to no avail. – squidpickles Mar 28 '14 at 20:44
  • If you tried with -d and it didn't show up, try everything again but on an illumos OS. If that still can't see it, I'm out of ideas. You may need to engage a data recovery expert if the data has monetary value, or start looking at the code (src.illumos.org) while on the illumos derivative and dtrace'ing the zpool import command to see what path it takes and try to figure out why it can't see your pool. – Nex7 Mar 30 '14 at 20:05
  • Yeah, same issue running OpenIndiana. As there are only a few files I really need, I may just try to recover them from the raw disk device. Thanks. – squidpickles Mar 31 '14 at 22:09
  • @slugchewer, if it's not too late, can you expand upon the commentary to answer your own question? Thanks. – Graham Perrin Jan 16 '16 at 04:07
  • @GrahamPerrin I did end up making it work. I edited the ZFS sources on my FreeBSD installation, and made them bypass all sanity checks. After disabling enough of those, I managed to get the pool imported. Someone must have bypassed my own sanity checks... – squidpickles Jan 16 '16 at 21:25
  • Just had the same problem, pulled my hair out for a few hours. Then realized I wasn't using sudo. –  Oct 25 '16 at 10:48

7 Answers

15

For future reference: simply doing zpool import -a (which will search for all pools) usually helps as well when a zpool/ZFS filesystem isn't recognised.
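
For example (the directory argument below is illustrative; on FreeBSD the pool's partitions often appear under /dev/gptid):

zpool import -a
zpool import -a -d /dev/gptid    # point the search at a specific device directory if plain -a finds nothing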

Mal
  • I even see mine in the cache, but trying this didn't work. I am backing up my cache file, trying with and without it, forcing it, etc. Also going to check into what Graham suggested above. – Brian Thomas Feb 09 '17 at 01:39
5

From commentary (from the opening poster):

I edited the ZFS sources on my FreeBSD installation, and made them bypass all sanity checks. After disabling enough of those, I managed to get the pool imported.

Graham Perrin
2

I had the same or a very similar issue; this helped:

ls -l /dev/disk/by-id/

Then possibly the first partition of the drive is the one to mount. For example:

sudo zpool import -a -d /dev/disk/by-id/ata-Samsung_SSD_abc-part1 -d /dev/disk/by-id/ata-Samsung_SSD_def-part1

(the -a switch may work, so there is no need to remember the pool name)

If you run into issues, try "man zpool-import".

I do not know why sudo zpool import -a does not find any pool and I have to specify the disks explicitly.
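
A variant worth trying is pointing -d at the whole by-id directory instead of listing every partition; this is a sketch assuming a Linux system laid out like the one above:

sudo zpool import -a -d /dev/disk/by-id    # scan every device node in that directory for pool labels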

16851556
2

I somehow screwed up my ZFS configuration. Unfortunately I don't recall exactly what I did (I had been changing some hardware, so I messed up; don't be like me!), but this worked for me. I'm using XigmaNAS (nas4free) and all commands below are issued via the terminal.

Some vague memory about what I did (and did not do):

  • Did not export any pool
  • Might have deleted (destroyed) the pool

Symptoms:

  1. In the Web GUI, the disk can be automatically imported and recognized as a zpool (not unformatted or UFS etc.)
  2. However, the GUI ZFS section cannot detect the zpool. So I cannot import the pool by simply hitting the buttons. Force import did not work either.
  3. SMART info about this disk looks all right in GUI. I don't think the disk is physically damaged.
  4. The GUI Information section shows the disk as da1. That is all the info I need before going to the terminal.
  5. Warning to other users: if you run into problems with the GUI, immediately stop any destructive operations, such as creating a new vdev or trying other disk formats. Go to the terminal.
  6. In the terminal, here are some of the commands I tried, with their results.

    • zpool import -a says no pool available to import
    • zpool status says no pools available (broken language? lol.)
    • gpart list -a does not show da1
    • gpart list da1 says gpart: no such geom: da1
    • zpool list says no pools available
    • glabel list -a does not show any pool in da1
    • zdb -l /dev/da1 is able to print the two labels in da1, so my disk is not dead
    • zpool import -D says that the pool on da1 is destroyed, and may be able to be imported

Solution:

Running zpool import -D -f (poolname) solved the issue.
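
As a sketch of the whole sequence (the pool name here is a placeholder):

zpool import -D               # list destroyed-but-importable pools; note the name or GUID
zpool import -D -f mypool     # force-import the destroyed pool by name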

Yvon
0

On my FreeNAS 11.2 system, a power cable failed and took 3 drives in an array offline. The system behaved oddly (the ssh port was open, but ssh would not respond).

After a forced power-off and cable replacement, the system powered up, but with no array. The array was not degraded; it was marked OFFLINE. I could not bring it online, or even import it again, no matter what I tried.

Only after deleting the entry for the array in the FreeNAS Web UI was I able to import the array again. It seems the system DB was corrupted, and the corruption interfered with recognition of the array.

Colin
0

Solution:

Running zpool import -D -f (poolname) solved the issue.

Worked for me. I created the pool on Ubuntu 18.04 LTS and reclaimed it on Ubuntu 20.04 LTS.
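
For reference, a typical move between hosts looks like the sketch below; -f is only needed when the pool was not cleanly exported on the old system (the pool name is a placeholder):

zpool export mypool      # on the old system, if it is still available
zpool import -f mypool   # on the new system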

0

I also posted my analysis on the TrueNAS Jira.

Mellman's comment saved my life: SOLVED - GPT Corrupt or invalid | TrueNAS Community (https://www.truenas.com/community/threads/gpt-corrupt-or-invalid.81180/)

Symptoms, as described by the OP:

  • zpool disappeared
  • all disks seemed present
  • dmesg output shows things like:

    GEOM_MULTIPATH: disk2 created
    GEOM_MULTIPATH: da1 added to disk2
    GEOM_MULTIPATH: da1 is now active path in disk2
    GEOM: multipath/disk2: corrupt or invalid GPT detected.
    GEOM: multipath/disk2: GPT rejected – may not be recoverable.

WARNING/DISCLAIMER: As I had a multi-mirror system, I initially removed 2 of the 3 disks to limit my risk and to avoid making disk clones before starting recovery. I had deleted my external backup two days earlier, so this was an all-or-nothing situation.

REMINDER: A mirror is not a backup. Always try to make a backup to a non-connected device (a USB drive or another storage server).

Commands used for diagnosis and repair:

zpool import -D

  • no pools available to import

sysctl kern.disks

  • kern.disks: da3 da2 da1 da0 ada0 (all the disks I expected were here)

zdb -l /dev/da3p2 (labels seemed to be OK)

LABEL 0

    version: 5000
    name: 'akira_p01'
    state: 0
    txg: 29173828
    pool_guid: 5424572115530985634
    errata: 0
    hostid: 4193025452
    hostname: ''
    top_guid: 1827022710836996365
    guid: 7863535274456000245
    vdev_children: 2
    vdev_tree:
        type: 'mirror'
        id: 1
        guid: 1827022710836996365
        metaslab_array: 35
        metaslab_shift: 34
        ashift: 12
        asize: 2998440558592
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 18165764508876870888
            path: '/dev/gptid/66eb141c-b09d-11e5-9fe8-00188b1dee45'
            whole_disk: 1
            DTL: 4252
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 7863535274456000245
            path: '/dev/gptid/696b5de4-b09d-11e5-9fe8-00188b1dee45'
            whole_disk: 1
            DTL: 4251
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data

(LABEL 1, LABEL 2 and LABEL 3 are exact copies of LABEL 0)

gmultipath list (STRANGE - this should have been empty; I never created these)

  • Shows a multipath geom for each disk that was in my pool. Why? Mellman's comment saved me here.

Geom name: disk1
Type: AUTOMATIC
Mode: Active/Passive
UUID: d68bb1af-3acc-11eb-9dd0-00012e7acada
State: DEGRADED
Providers:
1. Name: multipath/disk1
   Mediasize: 3000592981504 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   State: DEGRADED
Consumers:
1. Name: da0
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   State: ACTIVE

gmultipath destroy disk1 (I was very scared, but there was no need to be, as I was sure I had NOT set up any multipaths). Run this command for each disk in your pool (after verifying the labels!).
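
A sketch of that check-then-destroy loop, using the geom names reported by gmultipath list above:

gmultipath list             # confirm which multipath geoms exist and which disks they consume
gmultipath destroy disk1    # remove the bogus geom; repeat for disk2, disk3, ...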

zpool import -f

The pool was FOUND, but because I had physically removed the 2 other disks from the MIRROR, the zpool was not yet able to start.

  pool: akira_p01
    id: 5424572115530985634
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
config:

        akira_p01                                       UNAVAIL  insufficient replicas
          16830735507974382924                          UNAVAIL  cannot open
          mirror-1                                      DEGRADED
            18165764508876870888                        UNAVAIL  cannot open
            gptid/696b5de4-b09d-11e5-9fe8-00188b1dee45  ONLINE

I repeated gmultipath destroy for the two other disks.

I ran zpool import -f again: YES YES YES, the data is back.

zpool status

  pool: akira_p01
 state: ONLINE
  scan: scrub repaired 0B in 1 days 12:00:06 with 0 errors on Mon Dec 7 13:01:07 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        akira_p01                                       ONLINE       0     0     0
          gptid/64cb349c-b09d-11e5-9fe8-00188b1dee45    ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/66eb141c-b09d-11e5-9fe8-00188b1dee45  ONLINE       0     0     0
            gptid/696b5de4-b09d-11e5-9fe8-00188b1dee45  ONLINE       0     0     0

errors: No known data errors

Jan K