
Option1: Linux Partition -> LUKS -> LVM

Option2: Linux Partition -> LVM -> LUKS

Which stack is better from a system-load perspective?

Background: On my Linux server (CPU AMD Opteron 6338P, LSI 3ware SAS/SATA RAID controller with two 4 TB HDs in RAID 1, 64 GB RAM), which runs several LXC containers, the containers unfortunately influence each other's I/O latency. I would like to reduce this unwanted interference as much as possible with an optimal system layout. Because I rent a "dedicated server" from a hosting provider, I cannot change or optimize the hardware configuration.

questor

1 Answer


Although your server does not seem to leave room for hardware optimization, I would choose Option2 (Linux Partition -> LVM -> LUKS) over Option1 (Linux Partition -> LUKS -> LVM) for the sake of flexibility. For example:

  • In Option2, you will be able to bind one or more LV's to a container and set passphrases, keyfiles and other LUKS parameters, such as cipher and device type, individually for that container (see the sketch after this list). In Option1, you will also be able to bind one or more LV's to a container, but all LV's will inevitably share the same parameters; you won't be able, for example, to set a stronger cipher on one container and a weaker cipher on another.

  • In Option2, you will be able to create non-encrypted LV's alongside encrypted ones if performance requires it. In Option1, you won't be able to do so without creating a new volume group on different PV's.

  • In Option2, corruption of a LUKS header will cost you only one logical volume, and a per-LV header is easy to back up (also shown below). In Option1, corruption of the LUKS header will cost you the entire volume group.

  • In Option2, if you could attach solid-state disks to the server, it would be easier to set up lvmcache(7) on the existing volume group, and you could select individual LV's to be cached.
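
To illustrate, here is a minimal sketch of Option2 for a single container. The volume group name `vg0`, the LV names, sizes, cipher and the SSD device `/dev/sdX` are placeholders, not values from your setup; adjust them before use:

```
# Dedicated LV for one container (names and sizes are examples)
lvcreate -L 50G -n ct101 vg0

# Encrypt only this LV; cipher/type can differ per container
cryptsetup luksFormat --type luks2 \
    --cipher aes-xts-plain64 --key-size 512 /dev/vg0/ct101

# Open and create a filesystem for the container rootfs
cryptsetup open /dev/vg0/ct101 ct101_crypt
mkfs.ext4 /dev/mapper/ct101_crypt

# Back up this LV's LUKS header; losing it costs only this LV
cryptsetup luksHeaderBackup /dev/vg0/ct101 \
    --header-backup-file /root/ct101-luks-header.img

# A plain (unencrypted) LV can coexist in the same VG
lvcreate -L 20G -n scratch vg0
mkfs.ext4 /dev/vg0/scratch

# If an SSD PV were ever added, individual LVs could be cached
# vgextend vg0 /dev/sdX
# lvcreate --type cache-pool -L 100G -n cache0 vg0 /dev/sdX
# lvconvert --type cache --cachepool vg0/cache0 vg0/ct101
```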

My guess is that both options will show similar performance.
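
If you want to check that guess on your own hardware, two quick measurements help, assuming cryptsetup and the sysstat package are installed:

```
# In-memory throughput of the dm-crypt ciphers on this CPU (no disk I/O involved)
cryptsetup benchmark

# Compare against what the RAID 1 pair actually delivers under your container load
iostat -x 5
```

If the cipher throughput reported by the benchmark is far above the disk throughput, the HDs, not encryption, remain the bottleneck regardless of which option you pick.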

  • "My guess is that both options will show similar performance." depends on where the bottleneck is in the stack. The HDs are the slowest part, but we have buffers in RAM. Encryption seems to be the second bottleneck. And if there is only one large LUKS container, are all LXC container on the server affected equally? – questor Mar 05 '20 at 13:44
  • I forgot to mention that I am currently using Option 2, a mixture of encrypted and unencrypted LVs. I don't need different levels of encryption. "In Option2, a corruption of a LUKS header will cause a loss of one logical volume only. In Option1, a corruption of a LUKS header will cause a loss of the entire volume group." That is exactly what I fear! – questor Mar 05 '20 at 17:06
  • "depends on where the bottleneck is in the stack" - From your server description, I imagined that the bottleneck would be in the HD's. I don't think that cryptographic processing will cause the bottleneck to be moved to server's CPU, because it uses optimized CPU instruction sets nowadays. RAM buffers will help if your containers performs many reads and few writes. Or alternatively, if you change cache policies from _write through_ to _write back_. – Anderson Medeiros Gomes Mar 05 '20 at 20:49
  • "And if there is only one large LUKS container, are all LXC container on the server affected equally?" I guess so, provided that few CPU-bound containers are running and competing against LUKS for CPU cycles. I think that, for example, when an application writes raw data on a block in the middle of a LUKS device, such data is encrypted in transit by the CPU and, afterwards, the resulting payload is written to a block in the middle of the underlying device too. At the end, I think that I/O queue will be handled by the scheduler governing the disk partition. – Anderson Medeiros Gomes Mar 05 '20 at 20:51