
My Debian server is running ZFS on Linux. Today I had to reboot it twice due to software upgrades: a first reboot because of a ZFS update from 0.6.4-1.2-1-wheezy (or thereabouts) to 0.6.5.2-2-wheezy, which seemed to go fine, since I accessed my home dir in the pool afterwards. After the last reboot, zpool fails to import the pool:

# zpool import
   pool: storage
     id: 4490463110120864267
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://zfsonlinux.org/msg/ZFS-8000-72
 config:

        storage      FAULTED  corrupted data
        logs
          sda3       ONLINE
# zpool import storage
cannot import 'storage': I/O error
        Destroy and re-create the pool from
        a backup source.
# zpool import -F storage
cannot import 'storage': one or more devices is currently unavailable

I'm missing my complete pool, which should look like this:

    storage
      mirror
        scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T2132687-part1
        scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T2194187-part1

How can I tell zpool to use the data partitions? I'm reluctant to try attaching the disks to this pool or to a new one, because I fear the disks, or rather their metadata, would get cleared.
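To make it more concrete: I imagine what I'm after is something along these lines, where readonly=on is only my assumption of how to avoid any writes during the attempt:

    # my assumption: point the import at the by-id links and keep everything read-only
    zpool import -d /dev/disk/by-id -o readonly=on -F storage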

Edit/Update:

  • perhaps important: after re-reading both apt's history.log and wtmp, I'm no longer sure whether I actually accessed my home dir after the reboot for the first ZFS update. Meanwhile I have tried to go back to the former version, but I can't find any ZFS packages other than the most recent version.
  • I have two HDDs for data (sdb, sdc), GPT-partitioned, and both main partitions had been set up as a mirrored pool on ZFS, using /dev/disk/by-id. Device sda is an SSD with the Debian installation, some VM space and, in a separate partition /dev/sda3, the SLOG/ZIL. All disks are attached directly to the mainboard.
  • The zfsonlinux URL above also suggests a "zpool clear -F storage", which replies "no pools available".

2 Answers


Given that the zpool loss took place during a regular reboot, I hoped that at least the zpool export had happened. And even if the pool had been shut down uncleanly, I prefer to do rescue work on copies. So I added a large HDD to the system (which showed up as device sdb, thanks udev) and formatted it with two partitions of the same size as the ZFS partitions on the failed drives. Because this was a mirrored pool, I copied both old partitions to the new ones:

dd if=/dev/disk/by-id/scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T2132687-part1 of=/dev/sdb1 bs=104800
dd if=/dev/disk/by-id/scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T2194187-part1 of=/dev/sdb2 bs=104800
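(For completeness, the two target partitions on the new disk were created beforehand; a minimal sketch with sgdisk, where the size shown is only a placeholder and must match the source partitions exactly, could look like this:)

    # placeholder size -- make both partitions exactly as large as the source partitions
    sgdisk -n 1:0:+2794G -t 1:BF01 /dev/sdb
    sgdisk -n 2:0:+2794G -t 2:BF01 /dev/sdb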

Now I had a system with two identical mirrors.
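Before importing, a quick sanity check on the copies can't hurt; zdb should be able to read the ZFS labels from the copied partitions (this is only a check, the import does not require it):

    # dump the ZFS labels of the first copy; they should name the pool 'storage'
    zdb -l /dev/sdb1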

# zpool import 
   pool: storage
     id: 4490463110120864267
  state: ONLINE
 status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        storage                                             ONLINE
          mirror-0                                          ONLINE
            sdb1                                            ONLINE
            ata-WDC_WD30EFRX-68AX9N0_WD-WMC1T2194187-part1  ONLINE
        logs
          sda3                                              ONLINE
# zpool import storage
#

Hooray!

In the meantime I have backed up the data twice, and restoring to a completely new zpool is in progress.
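The restore itself is nothing exotic; it boils down to something like the following, where newstorage and @rescue are just placeholder names and the new pool is created on the original WD partitions:

    # snapshot the rescued pool, create a fresh mirror on the original partitions, replicate
    zfs snapshot -r storage@rescue
    zpool create -f newstorage mirror \
        /dev/disk/by-id/scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T2132687-part1 \
        /dev/disk/by-id/scsi-SATA_WDC_WD30EFRX-68_WD-WMC1T2194187-part1
    zfs send -R storage@rescue | zfs receive -F newstorage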

  • The lesson I've learned: there are only two kinds of data: data which has a backup, and data which is not lost yet. – ChristianM Dec 28 '15 at 22:00

The pool may not be able to find your disks since you specified /dev/disk/by-id in your original pool creation. Note how your slog device is recognized...

There's a pool import flag, -d, which allows you to point the import process at a particular directory to query for devices. Good advice here.

Try:

zpool import -F -d /dev/disk/by-id storage
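If that still comes back empty-handed, it's worth verifying that the by-id symlinks for your data partitions exist at all, e.g.:

    # check that udev actually created by-id links for the WD data disks
    ls -l /dev/disk/by-id/ | grep WD30EFRX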
  • This did not work (tried already earlier). I found in the zfs history that I added the slog device by-id as well: 2013-03-09.13:22:02 zpool add storage log /dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S12PNEAD130842J-part3 -f So I do not understand why the slog was shown as device sda3. Maybe because this pool was created with ZFS version 0.6.1. We will most probably never find out why, as the pool has now been rebuilt completely from scratch. – ChristianM Dec 28 '15 at 12:11