
On my FreeNAS server, zpool status tells me I have two ZFS pools, data and freenas-boot:

% zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 0 days 04:16:16 with 0 errors on Mon Nov 20 00:59:24 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        data                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/3e08fdba-4564-11e7-bdef-00fd45fc38ec  ONLINE       0     0     0
            gptid/3eba62c2-4564-11e7-bdef-00fd45fc38ec  ONLINE       0     0     0
            gptid/3f704246-4564-11e7-bdef-00fd45fc38ec  ONLINE       0     0     0
            gptid/40249d11-4564-11e7-bdef-00fd45fc38ec  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:13 with 0 errors on Sun Mar  4 03:45:14 2018
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2    ONLINE       0     0     0

errors: No known data errors

I would like to get statistics about my data zpool, but zdb gives me an error:

% sudo zdb -b data
zdb: can't open 'data': No such file or directory

But it works on the freenas-boot pool:

% sudo zdb -b freenas-boot                                                                              

Traversing all blocks to verify nothing leaked ...

loading space map for vdev 0 of 1, metaslab 55 of 119 ...
2.56G completed ( 881MB/s) estimated time remaining: 0hr 00min 00sec        
        No leaks (block sum matches space maps exactly)

        bp count:          281124
        ganged count:           0
        bp logical:    5928553472      avg:  21088
        bp physical:   2636954624      avg:   9380     compression:   2.25
        bp allocated:  3376803840      avg:  12011     compression:   1.76
        bp deduped:             0    ref>1:      0   deduplication:   1.00
        SPA allocated: 3376803840     used:  2.64%
        Dittoed blocks on same vdev: 50961

What am I doing wrong?

paulgreg
2 Answers


For whatever reason, the zpool.cache file lives in a different location on FreeNAS, and its zdb hasn't been patched to look there by default.

Add -U /data/zfs/zpool.cache before the pool name in every zdb invocation to get it to work.

For your example, the command would be: zdb -U /data/zfs/zpool.cache -b data
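
If you want to confirm the cache file is actually where FreeNAS is expected to keep it before pointing zdb at it, a minimal check (assuming the default /data/zfs/zpool.cache path from above) looks like this:

% ls -l /data/zfs/zpool.cache                # cache file FreeNAS maintains for its pools
% sudo zdb -U /data/zfs/zpool.cache -b data  # same -b run as before, now with the explicit cache path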

ohmantics
    This solved my problem. Searching the FreeNAS bug tracker I found [this bug](https://redmine.ixsystems.com/issues/14536), which was closed in 2016. – Brandon McClure Mar 30 '19 at 01:32

I had this problem on OmniOS, where zdb couldn't open my rpool. It was caused by a GUID mismatch between the ZFS metadata and the actual GUIDs of my disks. I guess this was the result of replacing broken hardware and shuffling disks between zpools...

The solution was to zpool detach one device of the mirror and then zpool attach it back.
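
A minimal sketch of that cycle, assuming a two-way mirror in rpool whose sides show up in zpool status as c1t0d0s0 and c1t1d0s0 (both device names are placeholders for illustration; substitute your own, and expect a resilver after the attach):

% zpool status rpool                           # note the two devices of the mirror
% sudo zpool detach rpool c1t1d0s0             # drop one side of the mirror
% sudo zpool attach rpool c1t0d0s0 c1t1d0s0    # re-attach it; fresh labels/GUIDs are written during the resilver
% zpool status rpool                           # wait for the resilver to complete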