
I'm building Proxmox VE on a Dell R820, and its PERC H710 card does not support passthrough or JBOD mode.

I've decided to configure the 14x 1TB SAS drives as single-disk RAID-0 virtual disks for ZFS storage, and the 2x 320GB SSDs as a RAID-1 volume for the system on LVM.
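
For context, this is roughly how I'm creating the virtual disks with MegaCli. It's only a sketch: the MegaCli64 path and the enclosure:slot IDs (enclosure 32, slots 0-15) are placeholders from my notes and will differ on other machines.

```
# Sketch: one single-drive RAID-0 virtual disk per SAS drive for ZFS,
# plus one RAID-1 virtual disk over the two SSDs for the system/LVM.
# Enclosure 32 and slots 0-15 are placeholders - check with -PDList first.
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64

$MEGACLI -PDList -a0                          # list physical drives (enclosure:slot IDs)

for slot in $(seq 0 13); do
    $MEGACLI -CfgLdAdd "-r0[32:${slot}]" -a0  # single-drive RAID-0 for ZFS
done

$MEGACLI -CfgLdAdd "-r1[32:14,32:15]" -a0     # RAID-1 of the two SSDs

$MEGACLI -LDInfo -Lall -a0                    # show the resulting virtual disks
```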

The official OpenZFS documentation says that best practice is to give ZFS full access to the drives, and it only briefly mentions the issues with single-disk hardware RAID-0.

Does anyone have experience building Proxmox VE on Dell servers without passthrough or JBOD support, and is there any way to give ZFS full access to the drives, bypassing hardware RAID entirely?

Hellseher
    Single disk RAID 0 is simply a workaround for the controller's lack of JBOD support, and for most practical purposes works just the same. – Michael Hampton Dec 16 '18 at 00:15
    Why are you utilizing hardware RAID with software RAID [ZFS]? This should not be done and is covered in depth on the [FreeNAS forum](https://forums.freenas.org/index.php) – JW0914 Dec 21 '18 at 01:17
  • @JW0914 It's only the second week of the test run and I've never dealt with `zfs` before; would you please give me a direct link to the discussion thread? – Hellseher Dec 21 '18 at 16:40
  • @Hellseher [FreeNAS Forum Search Results](https://forums.freenas.org/index.php?search/12871/&q=hardware+raid&o=relevance) – JW0914 Dec 22 '18 at 13:26
  • @JW0914 Thanks. The first 4 posts state "do not use hw RAID", OK, but I tried to find a more detailed explanation for my case - RAID-0 per drive. I can't avoid using a hw RAID card; the only alternative is an HBA card, which is out of budget. – Hellseher Dec 23 '18 at 19:08
  • @Hellseher You'll likely want to do some thorough research on the FreeNAS or FreeBSD forums to understand why it's a bad idea to combine hardware RAID with ZFS. It can obviously be done, but you'll likely want to understand the cons of that configuration. I also missed that you couldn't use JBOD, and I thought there was a search result on the first results page where someone else also couldn't use JBOD and had to combine hardware RAID with ZFS. – JW0914 Dec 23 '18 at 23:16

1 Answer


I'm facing the same problem at the moment. From what I could research, there are a few differences:

  • RAID-0 on the H710 lets you use the battery-backed write cache. That can be a speedup, but there is a lot of logic between the sync() call ZFS issues and the real sync on disk, so the controller can lie to ZFS. This is not fundamentally different from using LVM or classic hardware RAID, so it may be moot, but depending on your requirements it can be a problem: when a write fails on the path battery -> disk, ZFS doesn't know. (See the sketch after this list for the cache settings I plan to try.)

  • Hot-swap doesn't appear to be possible - you have to recreate the RAID-0 virtual disk for a replaced drive, and possibly reboot every time you change something :/

  • Something to test is the actual on-disk layout - if you want to move the pool and disks to another machine, the H710 in RAID-0 mode may do something awkward like writing its own metadata or a special partition scheme to the disk. I haven't verified this (see the label check in the sketch after this list).
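
For the first and third points, this is what I intend to try. It's untested on this exact box, and the MegaCli path and the `/dev/sdb` device name are placeholders:

```
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64

# Reduce how much the controller can lie to ZFS: write-through instead of
# write-back, no read-ahead, and disable the physical disks' own caches.
$MEGACLI -LDSetProp WT -Lall -aALL
$MEGACLI -LDSetProp NORA -Lall -aALL
$MEGACLI -LDSetProp -DisDskCache -Lall -aALL

# Check what actually ends up on a RAID-0 virtual disk: ZFS labels and
# partition table (replace /dev/sdb with one of your virtual disks).
zdb -l /dev/sdb
sgdisk -p /dev/sdb
```

Note that write-through gives up the BBU cache speedup, so it's a trade-off between performance and how honestly the controller reports sync writes.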

Besides that, I'll also set it up using RAID-0 virtual disks - let's see what happens, and have backups ready :)
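
If that works out, the pool on top of those virtual disks would look roughly like this. Just a sketch of what I plan to try: the pool name, the raidz2 layout and the wwn-... IDs are placeholders, and only three of the 14 device paths are shown.

```
# Create the pool over the single-drive RAID-0 virtual disks,
# using stable /dev/disk/by-id paths rather than /dev/sdX names.
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/wwn-0x6d0946600f7e2d0000000000aaaaaaa1 \
    /dev/disk/by-id/wwn-0x6d0946600f7e2d0000000000aaaaaaa2 \
    /dev/disk/by-id/wwn-0x6d0946600f7e2d0000000000aaaaaaa3

# Verify layout and health.
zpool status tank
```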

  • More weird stuff:

https://forum.proxmox.com/threads/proxmox-5-1-zfs-fresh-installation-unable-to-boot-due-to-grub.40984/#post-197732

> Your SCSI controller/BIOS/... only presents the first disk as bootable (as can be seen on the screenshot where you did 'ls' in the grub rescue shell). With just that one disk, grub is unable to read the data. You might be able to fix this by playing around with BIOS/controller settings; otherwise you need to work around it by using a different disk as boot disk and put /boot there (losing redundancy for booting).
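
In case I hit that, the workaround would look roughly like this. A sketch only - /dev/sdo1 and /dev/sdo are placeholders for whatever disk the controller does present as bootable:

```
# Put /boot on a dedicated partition of the bootable disk and
# reinstall GRUB there (losing redundancy for booting, as noted above).
mkfs.ext4 /dev/sdo1
mount /dev/sdo1 /mnt
cp -a /boot/. /mnt/
umount /mnt
echo '/dev/sdo1 /boot ext4 defaults 0 2' >> /etc/fstab
mount /boot
grub-install /dev/sdo
update-grub
```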

kei1aeh5quahQu4U