
We are building a system for the archival and scientific analysis of weather data.

The setup is redundant: two HP DL580 servers running Proxmox with ZFS on Linux (ZoL), plus some GPUs for analysis. On each server we plan five pools of around 50 TB each. We use SSDs for density and read speed, and we have been working with HPE read-intensive (TLC) SSDs over the last two years. We are considering the following changes for the next archive pools:

  • Use HPE QLC "very read-optimized" SSDs. They come with a reduced DWPD rating, especially for random writes.
  • Move from striped mirrors to RAIDZ2 (8 × 7.68 TB); a rough pool-creation sketch follows below.
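
For clarity, this is roughly how the planned pool would be created (pool name and device paths are placeholders, not our actual configuration):

    # 8 x 7.68 TB SSDs in a single RAIDZ2 vdev, aligned to the 4K physical sectors
    zpool create -o ashift=12 archive raidz2 \
        /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 \
        /dev/disk/by-id/ssd3 /dev/disk/by-id/ssd4 \
        /dev/disk/by-id/ssd5 /dev/disk/by-id/ssd6 \
        /dev/disk/by-id/ssd7 /dev/disk/by-id/ssd8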

Data is stored as files (25%) and in a database (InnoDB, 75%), and is written only once.

Is the combination of RAIDZ2 and QLC SSDs reasonable for this type of archive?

Are there ZFS-specific best practices or pitfalls regarding QLC SSD endurance?

Edit: sample smartctl output for a current TLC SSD in the striped mirror

Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org  
=== START OF INFORMATION SECTION ===  
Device Model:     VK007680GWSXN  
Serial Number:      
LU WWN Device Id: 5 00a075 1266adce4  
Firmware Version: HPG2  
User Capacity:    7,681,501,126,656 bytes [7.68 TB]  
Sector Sizes:     512 bytes logical, 4096 bytes physical  
Rotation Rate:    Solid State Device  
Form Factor:      2.5 inches  
Device is:        Not in smartctl database [for details use: -P showall]  
ATA Version is:   ACS-3 T13/2161-D revision 5  
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)  
Local Time is:    Mon Sep 21 21:11:42 2020 CEST  
SMART support is: Available - device has SMART capability.  
SMART support is: Enabled  
=== START OF READ SMART DATA SECTION ===  
SMART overall-health self-assessment test result: PASSED  
General SMART Values:  
Offline data collection status:  (0x00) Offline data collection activity
                    was never started.  
                    Auto Offline Data Collection: Disabled.  
Self-test execution status:      (   0) The previous self-test routine completed
                    without error or no self-test has ever 
                    been run.  
Total time to complete Offline   
data collection:        (26790) seconds.  
Offline data collection
capabilities:            (0x7b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                    General Purpose Logging supported.
Short self-test routine 
recommended polling time:    (   2) minutes.
Extended self-test routine
recommended polling time:    (  45) minutes.
Conveyance self-test routine
recommended polling time:    (   3) minutes.
SCT capabilities:          (0x0035) SCT Status supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   100   100   050    Pre-fail  Always       -       0  
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0  
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       4514  
 11 Unknown_SSD_Attribute   0x0012   100   100   000    Old_age   Always       -       5  
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       6  
171 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0  
172 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0  
173 Unknown_Attribute       0x0033   100   100   010    Pre-fail  Always       -       26  
174 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       5  
175 Program_Fail_Count_Chip 0x0033   100   100   001    Pre-fail  Always       -       0  
180 Unused_Rsvd_Blk_Cnt_Tot 0x003b   100   100   001    Pre-fail  Always       -       0  
184 End-to-End_Error        0x0032   100   100   000    Old_age   Always       -       0  
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0  
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       7  
194 Temperature_Celsius     0x0022   067   057   000    Old_age   Always       -       33 (Min/Max 22/43)  
196 Reallocated_Event_Count 0x0033   100   100   001    Pre-fail  Always       -       0  
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0  
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0  
199 UDMA_CRC_Error_Count    0x003e   100   100   000    Old_age   Always       -       0  
SMART Error Log not supported  
SMART Self-test Log not supported  
SMART Selective self-test log data structure revision number 1  
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS  
    1        0        0  Not_testing  
    2        0        0  Not_testing  
    3        0        0  Not_testing  
    4        0        0  Not_testing  
    5        0        0  Not_testing  
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Benoit
  • You say these are your "next" pools. What was your previous storage? What is the goal that led you to choose RAID-Z2 on QLC? – John Mahowald Sep 21 '20 at 15:42
  • During the first two years we used striped mirrors (ZFS RAID 10) with read-intensive (TLC) SSDs. We are considering moving to RAIDZ2 to increase the total storage within the available number of slots. Regarding the newly available QLC drives: they are 33% cheaper, and judging from our iostats they seem to offer enough endurance. However, as a non-IT person I might be missing some issues (disk wear resulting from sync, type of write operations). – Benoit Sep 21 '20 at 16:33
  • Can you show the SMART status on one of your current TLC disks (ie: `smartctl --all /dev/sdX`)? – shodanshok Sep 21 '20 at 17:23
  • @Benoit thanks, but a lot of info is missing from the comment. Please add the SMART output to the question itself, without trimming anything. – shodanshok Sep 21 '20 at 19:05
  • @Benoit the disk depicted by your SMART stats has only ~6 months of lifetime. Is this correct? I understood from your previous comments that your current system has been online for at least ~2 years. – shodanshok Sep 21 '20 at 19:19
  • @shodanshok last year's workload included lots of experiments, so I did not pick a disk from last year's pool. This year's 6 months are representative of the future use, with operational measuring devices delivering data. The pools will fill up over one year and then be used for machine learning (read-only). – Benoit Sep 21 '20 at 19:22

2 Answers


We have implemented the solution. The QLC drives appear to be fine for our use case.

However, RAIDZ2 turned out to be impractical:

The combination of ashift=12 with a 16K recordsize (the appropriate recordsize for our DB) leads to a high parity overhead.

With RAIDZ2, two 4K parity blocks were written for every 16K of actual data, so one third of the storage went to parity. We therefore moved back to striped mirrors.
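
To make the overhead concrete, here is the per-record accounting (assuming an 8-wide RAIDZ2 vdev, ashift=12 and no compression):

    16K record at ashift=12  ->  4 data sectors of 4K
    RAIDZ2 parity            ->  2 parity sectors of 4K per stripe
    written to disk          ->  6 sectors = 24K for 16K of user data
    parity overhead          ->  2 / 6 = ~33%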

Benoit

Due to how HP drives report their SMART info, the provided data are not tremendously useful. That said, attribute 173 should be the worst-case erase count (i.e. wear) of the NAND blocks. With only 26 max erase cycles after 6 months (roughly 52 cycles per year), and assuming TLC NAND rated for ~3000 program/erase cycles, your SSD should be good for 3000 / 26 / 2 ≈ 57.7 years.

This is clearly an overestimate: well before that you will need to replace something else in your server (or even the SSD itself, due to an unexpected controller/NAND failure). It is, however, a good starting point for evaluating enterprise QLC SSDs: even with 1/10 of that endurance you would still be looking at ~5 years of service time, which is what their warranty typically covers.

Moreover, enterprise QLC drives generally use NAND chips rated at ~1000 cycles, so at your current write rate (~52 cycles per year) real-world endurance should be significantly higher than the conservative 5-year figure above.

Coupled with the fact that, as per your question and comments, these SSDs are going to spend most of their time serving a read-only workload, going with QLC drives should pose no issue at all, unless the slower write speed of QLC is significant for your workload or you plan to leave the server unpowered for extended periods of time.
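
If you want to keep an eye on wear over time, periodically recording the relevant SMART attribute is enough. A minimal sketch, assuming the new drives expose the same worst-case erase count in attribute 173 as your current HP TLC drives (the device glob is a placeholder):

    # print the worst-case NAND erase count (attribute 173) for each pool member
    for dev in /dev/sd[a-h]; do
        echo -n "$dev: "
        smartctl -A "$dev" | awk '$1 == 173 { print $NF }'
    done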

Regarding RAIDZ2, it can be a good choice for SSDs, but be sure to create your pool with ashift=12 and to set a reasonably small recordsize property (I strongly suggest 16K rather than the default 128K).
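
A minimal sketch of that setup, with separate datasets so the small recordsize only applies to the database (pool and dataset names are assumptions, not taken from the question):

    # InnoDB uses 16K pages, so match it; plain archive files can keep a larger recordsize
    zfs create -o recordsize=16K  archive/db
    zfs create -o recordsize=128K archive/files
    zfs get recordsize archive/db archive/files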

shodanshok
  • thanks, per policy our disks are replaced after 5 years, so your arguments give me confidence to move on with the QLC drives. I had the ashift correct, but I will need to read up on the optimal recordsize. – Benoit Sep 21 '20 at 20:06