
While preparing to move a ZFS pool to another server chassis, I ran a `zpool export` while in multi-user mode. In hindsight, I should have done this from a rescue disk.

The export failed, and since rebooting I have never been able to import the pool. All of the disks and the pool itself report an ONLINE state, so the problem looks more like a software issue in ZFS, possibly a metadata issue. Has anyone seen this before, or does anyone have suggestions for recovering data from a possibly corrupted pool? Beyond the I/O error itself I have not been able to get any useful feedback about what is causing it. I tried running the import under truss to see what is going on.

truss output: http://pastebin.com/DSDpuR1i
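For reference, a truss capture like the one above can be produced along these lines on FreeBSD (the output path is illustrative; `-f` follows forked children and `-s` widens string decoding so paths in calls like `openat()` are more readable):

```shell
# Follow forks and log every syscall of the import attempt to a file
# (/tmp/zpool-import.truss is just an example path)
truss -f -s 256 -o /tmp/zpool-import.truss \
    zpool import -f -o altroot=/mnt rpool
```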

gpart list output: http://pastebin.com/Wxgr2PMx

I set this up on FreeBSD 9, and I believe the pool is ZFS v28.

As a side note, I know I should have had backups. The reason I did not was money more than anything else. The plan was to move this pool to a new Norco chassis and add an equal number of disks for a second pool to mirror to.

root@nas01:~ # zpool import
   pool: rpool
     id: 15664112288097167104
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

        rpool                  ONLINE
          raidz1-0             ONLINE
            diskid/DISK-%20p3  ONLINE
            da1p3              ONLINE
            da2p3              ONLINE
            da3p3              ONLINE
            da5p3              ONLINE
            da4p3              ONLINE
root@nas01:~ # zpool import -f -o altroot=/mnt rpool
cannot import 'rpool': I/O error
        Destroy and re-create the pool from
        a backup source.
root@nas01:~ #

Edit: this is what I get when I try with -nfF or -fF:

root@nas01:~ # zpool import -nfF -o altroot=/mnt rpool
root@nas01:~ # echo $?
1
root@nas01:~ # zpool import -fF rpool
cannot import 'rpool': I/O error
        Destroy and re-create the pool from
        a backup source.
root@nas01:~ #
Glen
  • Is there a way to have truss output what is actually being opened? `openat(0x6,0x802c390d0,0x0,0x0,0x7fffffffb830,0x802c00c78) = 7 (0x7)` isn't too helpful. – Mark Wagner Sep 24 '14 at 00:40
  • Not that I know of. I'm more of a Linux guy, so the truss output has been more of a mystery to me than strace output would have been. Basically I ran truss while trying to import the pool to get a better idea of what the I/O error actually means. – Glen Sep 24 '14 at 08:49
  • Sounds like the issue I had in OpenZFS 0.75 https://serverfault.com/a/1004188/52734 – Louis Waweru Feb 22 '20 at 08:33

1 Answer


Try a dry-run recovery import first:

 zpool import -nfF rpool

If you don't see any critical errors, run the same command without the -n option to actually perform the rewind import:

 zpool import -fF rpool
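If -fF still fails with the same I/O error, two more generic ZFS recovery steps are worth trying before giving up (these are not specific to this pool): a read-only import, which avoids all writes and sometimes succeeds where a read-write import fails, and the undocumented extreme-rewind flag -X, which tries progressively older transaction groups and may discard recent writes:

```shell
# Read-only import: makes no changes to the pool on disk
zpool import -o readonly=on -f rpool

# Extreme rewind (-X): rolls back further through old txgs; recent
# data may be lost, so use it only to salvage what you can
zpool import -fFX rpool
```

If the read-only import works, copy the data off with zfs send or plain file copies before attempting anything else.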
c4f4t0r