
I just redid my NAS, switching from ZFS to LVM RAID. This is a new RAID with 3x8TB drives; each physical volume is a LUKS volume.
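
Roughly, the setup looks like this. This is a sketch, not the exact commands: the raid5 type is inferred from the sizes below (three ~7.28t images backing a 14.55t LV), the partition/device names are illustrative, and the cache devices are left out:

  # each 8TB drive gets a LUKS container, opened as /dev/mapper/sdX1_crypt
  cryptsetup luksFormat /dev/sda1
  cryptsetup open /dev/sda1 sda1_crypt
  # ... same for sdb1 and sdc1

  # the opened LUKS devices become the LVM physical volumes
  pvcreate /dev/mapper/sd{a,b,c}1_crypt
  vgcreate vgraid /dev/mapper/sd{a,b,c}1_crypt

  # one raid5 LV spanning all three PVs
  lvcreate --type raid5 -l 100%FREE -n lvraid vgraid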

lvs -a -o+devices

shows me:

  LV                        VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                 
  [lvol0_pmspare]           vgraid ewi-------    2.00g                                                     /dev/mapper/cache_data(359138)                          
  lvraid                    vgraid rwi-aor---   14.55t                                    90.92            lvraid_rimage_0(0),lvraid_rimage_1(0),lvraid_rimage_2(0)
  lvraid_cache_data         vgraid Cwi---C---    1.37t                                                     lvraid_cache_data_cdata(0)                              
  [lvraid_cache_data_cdata] vgraid Cwi-------    1.37t                                                     /dev/mapper/cache_data(0)                               
  [lvraid_cache_data_cmeta] vgraid ewi-------    2.00g                                                     /dev/mapper/cache_meta(0)                               
  [lvraid_rimage_0]         vgraid Iwi-aor---   <7.28t                                                     /dev/mapper/sda1_crypt(1)                               
  [lvraid_rimage_1]         vgraid Iwi-aor---   <7.28t                                                     /dev/mapper/sdb1_crypt(1)                               
  [lvraid_rimage_2]         vgraid Iwi-aor---   <7.28t                                                     /dev/mapper/sdc1_crypt(1)                               
  [lvraid_rmeta_0]          vgraid ewi-aor---    4.00m                                                     /dev/mapper/sda1_crypt(0)                               
  [lvraid_rmeta_1]          vgraid ewi-aor---    4.00m                                                     /dev/mapper/sdb1_crypt(0)                               
  [lvraid_rmeta_2]          vgraid ewi-aor---    4.00m                                                     /dev/mapper/sdc1_crypt(0)                               

While restoring my backups, all kinds of alarms went off; specifically, the load went through the roof. Even if I stop everything that's running on the machine (it runs a bunch of Docker containers), the load stays at ~5-7.

iotop tells me that the process with the most I/O is dmcrypt_write, so I am assuming that this is because the RAID is syncing all blocks, and that it should be over when the "Cpy%Sync" column hits 100%.
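
In the meantime, I'm watching the sync with something like this (sync_percent is the field behind the Cpy%Sync column; raid_sync_action needs a reasonably recent LVM):

  # re-check every 60 seconds; sync_percent is the Cpy%Sync value
  watch -n 60 'lvs -a -o name,sync_percent,raid_sync_action vgraid'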

Is this correct?
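
If that's the case, I assume I could also throttle the resync while the machine is in use, since LVM RAID uses the kernel MD driver underneath (untested; the number is illustrative, in KiB/s per device):

  # lower the ceiling on background resync I/O
  sysctl dev.raid.speed_limit_max=10000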

.rm
