
I am running Ubuntu 16.04 on ZFS.

I have my OS on rpool and my data in /tank.

Problem: I have added two 6TB drives to my pool using the following command:

# zpool add -f tank mirror ${DISK1} ${DISK2}

The drives were added, but instead of the roughly 6TB I was expecting, I only gained an additional 2TB. Here is the output of df -h /tank

Filesystem      Size  Used Avail Use% Mounted on
tank            2.1T     0  2.1T   0% /tank

and here is the output of # zpool list tank

NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  2.57T   460G  2.12T         -     7%    17%  1.00x  ONLINE  -
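
If it helps, I believe a per-vdev breakdown and the usable space at the filesystem layer can be pulled with the two commands below; the -v output should show SIZE/ALLOC/FREE for each mirror separately, which I assume would make it obvious how much the new mirror is actually contributing.

# zpool list -v tank
# zfs list tank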

Here is the output of # zpool status

pool: rpool
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Feb 12 00:24:58 2017
config:

NAME                                                     STATE     READ WRITE CKSUM
rpool                                                    ONLINE       0     0     0
  mirror-0                                               ONLINE       0     0     0
    ata-Samsung_SSD_850_EVO_250GB_S2R5NB0HA87070Z-part1  ONLINE       0     0     0
    ata-Samsung_SSD_850_EVO_250GB_S2R5NB0HB09374D-part1  ONLINE       0     0     0

errors: No known data errors

pool: tank
state: ONLINE
scan: scrub repaired 0 in 1h8m with 0 errors on Sun Feb 12 01:32:07 2017
config:

NAME                                             STATE     READ WRITE CKSUM
tank                                             ONLINE       0     0     0
  mirror-0                                       ONLINE       0     0     0
    wwn-0x50014ee0561bff3f-part1                 ONLINE       0     0     0
    wwn-0x50014ee1011a7ad7-part1                 ONLINE       0     0     0
  mirror-1                                       ONLINE       0     0     0
    ata-ST6000NE0021-2EN11C_ZA14Q289             ONLINE       0     0     0
    ata-ST6000NE0021-2EN11C_ZA13YT32             ONLINE       0     0     0
cache
  ata-Samsung_SSD_850_PRO_512GB_S39FNX0J102027A  ONLINE       0     0     0

errors: No known data errors

I tried # zpool set autoexpand=on tank, but no joy; the pool is still reporting about 2.5TB.
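
(If it helps to double-check, the property can be read back with the command below, though as I understand it autoexpand only kicks in once the devices themselves report the larger size.)

# zpool get autoexpand tank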

Here is the output of # lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   477G  0 disk 
├─sda1   8:1    0   477G  0 part 
└─sda9   8:9    0     8M  0 part 
sdb      8:16   0     2T  0 disk 
├─sdb1   8:17   0     2T  0 part 
└─sdb9   8:25   0     8M  0 part 
sdc      8:32   0     2T  0 disk 
├─sdc1   8:33   0     2T  0 part 
└─sdc9   8:41   0     8M  0 part 
sdd      8:48   0 596.2G  0 disk 
└─sdd1   8:49   0 596.2G  0 part 
sde      8:64   0 596.2G  0 disk 
└─sde1   8:65   0 596.2G  0 part 
sdf      8:80   0 232.9G  0 disk 
├─sdf1   8:81   0 232.9G  0 part 
├─sdf2   8:82   0  1007K  0 part 
└─sdf9   8:89   0     8M  0 part 
sdg      8:96   0 232.9G  0 disk 
├─sdg1   8:97   0 232.9G  0 part 
├─sdg2   8:98   0  1007K  0 part 
└─sdg9   8:105  0     8M  0 part 
sr0     11:0    1  1024M  0 rom  
zd0    230:0    0     4G  0 disk [SWAP]

Key:

sda = L2ARC for tank (samsung pro)

sdb & sdc = Seagate IronWolf 6TB drives (new mirror in tank)

sdd & sde = WD 596GB drives in tank mirror

sdf & sdg = rpool mirror

Do you know why my machine is only seeing these new drives as 2TB?

Is there anything I can do about it?

Will I need to destroy my tank to fix the issue (if there is a fix)?

posop
  • How are they physically connected? Are they connected through some kind of USB interface or something that only supports 2TB drives? What do you see if you run `lsblk` for the WD drives? Does that report 6TB? – Zoredache Feb 21 '17 at 23:39
  • `Will I need to destroy my tank to fix` - Hopefully not. Since they are in a mirror, you should be able to remove one member, fix that, re-add it, then fix the other, depending on what the real underlying problem is. – Zoredache Feb 21 '17 at 23:42
  • They are connected through SATA, not USB. I ran `lsblk` and it looks like the system is reporting them as 2TB drives as well. – posop Feb 22 '17 at 00:02
  • That seems really unusual. I am almost tempted to think you are at the point where you need to figure out what the SATA chipset is and verify that it doesn't have problems with 2TB+ drives. Or maybe there is a limitation in your BIOS, or some other hardware component that you are running into. – Zoredache Feb 22 '17 at 00:05
  • I'm sorry... are you dealing with unequal-sized disks? – ewwhite Feb 22 '17 at 00:21
  • @ewwhite: I have 2 mirrors in my pool "tank". One is a 500GB mirror and one is a 2TB mirror. From what I read this is the appropriate way to expand storage in ZFS. If I'm missing something, by all means let me know. – posop Feb 22 '17 at 00:27
  • @ewwhite if you match up sizes from the `lsblk` output sdd/sde seem to be mirror-0, sda seems to be the cache, sdb, sdc seem to be the 6TB drives incorrectly reporting their size. The model `ST6000NE0021` of both devices in the mirror-1 does seem to indicate that these should be 6TB drives, the fact that lsblk isn't seeing the correct size is the biggest problem in my mind, because it probably means the kernel is seeing the incorrect size. – Zoredache Feb 22 '17 at 01:36
  • This is not the right way to manage storage in ZFS. Your new drives should be their own pool. It does not make sense to mirror dissimilar drives or have unequally-sized disks in the same pool. – ewwhite Feb 22 '17 at 02:15

2 Answers


Two things going on here.

  1. Your SATA controller likely doesn't support >2TB disks. You'll have to get a new controller in order to get the full capacity out of them.
  2. You've added a mirror of 6TB disks (currently seen as 2TB each) to a pool that already contains a 596GB mirror vdev. While this does add storage to the pool, it is a bad setup for performance. Consider the case where the pool is empty: writes are spread across the striped vdevs to increase performance, so the 596GB vdev will fill up much faster than the larger vdev, eventually forcing ZFS to write almost exclusively to the larger one. This negates the performance gain you'd expect from running striped mirrors (a quick way to watch the per-vdev imbalance is sketched below).
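
If you want to see that imbalance for yourself, a quick (purely diagnostic) check is to watch per-vdev capacity and I/O while the pool is under load:

# zpool iostat -v tank 5

This prints allocation, free space and I/O statistics broken down per vdev every 5 seconds, so you can watch how unevenly the two mirrors fill up and get hit.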

You'll always want the same size drives (ideally, I believe, even the same geometry) in all vdevs of a pool for optimal performance.

Is there anything I can do about it?

You can't remove the vdevs now that they are added, but you can replace disks with bigger disks. If you want optimal performance here, you can either:

  • a) Get a SATA controller that supports >2TB drives plus two more 6TB drives to replace the 596GB drives (a rough replacement sequence is sketched after this list).
  • b) Get four 2TB drives to replace all four existing drives, and use the 6TB drives for something else.
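
A rough sketch of option a), assuming the new controller presents the full 6TB and using the WD device names from your zpool status output (${NEW_DISK1} and ${NEW_DISK2} are just placeholders for whatever IDs the new drives show up with):

# zpool set autoexpand=on tank
# zpool replace tank wwn-0x50014ee0561bff3f-part1 ${NEW_DISK1}

(wait for the resilver to finish, checking progress with # zpool status tank, then)

# zpool replace tank wwn-0x50014ee1011a7ad7-part1 ${NEW_DISK2}

Once both members of mirror-0 have been replaced and resilvered, that vdev grows to the size of its smallest member.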

Will I need to destroy my tank to fix the issue (if there is a fix)?

Not with any of the above solutions. If you want to remove one of the mirrors you'll have to recreate the pool.

Mikolan
  • 1) You were spot on. I contacted Super Micro about my motherboard and they said the RAID controller cannot present disks larger than 2TB. They said I have the option to purchase a new RAID controller. – posop Feb 27 '17 at 23:48
  • 2) How does the community suggest expanding storage once you fill the first mirror? I have read in several places that you just tack on a second mirror. I understand you don't get the stripe benefit if you go this way, but budgetary considerations prevent me from buying all the hard drives I will ever need up front. Thank you. – posop Feb 27 '17 at 23:50
  • @posop I'm no expert on the internals of ZFS, but I don't think ZFS will spread your existing data out over the new vdev. Adding a mirror will give you more storage, but your performance may suffer as the first mirror fills up. Generally, I think the recommendation is to plan your layout for performance and redundancy from the get-go, and if you eventually require more storage, swap out all drives for larger ones. That said, if performance of the pool is not an issue, adding vdevs might be fine for you. – Mikolan Feb 28 '17 at 12:26

Looking at the lsblk output, your devices are reported as 2TB disks. This means that destroying and recreating the pool will have no effect on the available space.

Are your SATA ports configured in Legacy/IDE mode? If so, try putting them in AHCI mode.
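
If switching to AHCI (or a new controller) gets the kernel to see the full 6TB, the existing vdev should be able to grow in place without destroying anything; a minimal sketch, using the device names from the zpool status output in the question:

# zpool set autoexpand=on tank
# zpool online -e tank ata-ST6000NE0021-2EN11C_ZA14Q289
# zpool online -e tank ata-ST6000NE0021-2EN11C_ZA13YT32

The -e flag asks ZFS to expand each device to use all of the space now available to it, after which mirror-1 should report its real size.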

Also, please post the output of dmesg | grep -i sdb
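
For completeness, a few other non-destructive checks that show what size the kernel and the drives themselves are reporting (this assumes the new drives are still sdb and sdc, and that smartmontools is installed):

# lsblk -b -d /dev/sdb /dev/sdc
# smartctl -i /dev/sdb

lsblk -b prints exact byte counts and smartctl -i prints the capacity the drive itself advertises; if the drive reports ~6TB but the kernel only sees 2TB, the controller/BIOS is the likely culprit.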

shodanshok