
I recently acquired a Supermicro SYS-1029P-N32R with sixteen 14 TB NVMe SSDs (Micron 9300 Pro) at work. The box has 100-gigabit networking to our VMware hosts. We've tried using FreeNAS to host ZFS zvols as iSCSI LUNs, but we were unimpressed with the results, and FreeBSD/FreeNAS isn't officially supported on this hardware; in particular, we've seen some odd networking issues.

Does anyone have thoughts on how to get the best performance out of this machine as shared storage for our VMware hosts? All hosts connecting to it run ESXi 7.0.

  • What about RAID 10 with XFS on CentOS/RHEL 8, served over NFS? What settings/stripe sizes would be best for this? (A rough sketch of what I have in mind follows the list.)

  • Ceph on CentOS/RHEL 8 with one OSD per disk?

  • Windows Server as an iSCSI host?

We don't have access to vSAN, so we need either a Linux or Windows solution.
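
To make the first option concrete, here's roughly the layout I'm picturing. It's only a sketch; the device names, chunk size, subnet, and export path are placeholders I'd still need to validate, not a tested configuration:

    # Rough sketch of the RAID 10 + XFS + NFS idea (untested; device names,
    # chunk size, subnet, and export path are placeholders).
    import subprocess

    def run(cmd):
        """Run a shell command and fail loudly so a broken step is obvious."""
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # Assumed device naming for the sixteen Micron 9300s.
    nvme_devices = " ".join(f"/dev/nvme{i}n1" for i in range(16))

    # RAID 10 across all 16 drives; the 512 KiB chunk is a guess to benchmark.
    run(f"mdadm --create /dev/md0 --level=10 --raid-devices=16 --chunk=512K {nvme_devices}")

    # mkfs.xfs usually picks stripe geometry up from md automatically; shown
    # explicitly here (su = chunk size, sw = 16 drives / 2 for RAID 10).
    run("mkfs.xfs -d su=512k,sw=8 /dev/md0")
    run("mkdir -p /export/vmware && mount /dev/md0 /export/vmware")

    # Export to the ESXi subnet (placeholder subnet), then reload exports.
    with open("/etc/exports", "a") as f:
        f.write("/export/vmware 10.0.0.0/24(rw,no_root_squash,sync)\n")
    run("exportfs -ra")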

  • Hi, in your host OS, which ESX adapter do you use? Paravirtual? It's optimised a lot more than the LSI Logic one, but it can be harder to set up in your VM. – yagmoth555 Jul 11 '20 at 19:40
  • You should hire a ZFS consultant to optimize the system and build it in a robust manner... ;) – ewwhite Jul 11 '20 at 20:10
  • (Also, details and actual metrics on your requirements and what "unimpressed" means.) – ewwhite Jul 11 '20 at 20:15
  • Another great thing to try is NVMe-oF; VMware added support for it in vSphere 7.0: https://storagehub.vmware.com/t/vsphere-7-core-storage/nvmeof/ It shows great results on other systems; I tested it on Linux-to-Linux setups. Example: https://www.hyper-v.io/nvme-part-1-linux-nvme-initiator-linux-spdk-nvmf-target/ – Stuka Jul 18 '20 at 22:47

2 Answers


Ceph is out in your case because you realistically need four nodes to act as fault domains; with only one node and a bunch of OSDs you'll end up with painful rebuilds after any planned or unplanned host downtime. Windows is out because its iSCSI target is NOT certified for ESXi / vSphere storage and is, in general, quite a low-performance solution. A single Ubuntu box with ZFS and LIO exposing a few iSCSI LUNs should do the trick.
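
Just as a sketch of what that could look like, assuming striped mirrors and LIO's targetcli: the device names, zvol size, volblocksize, and IQN are all placeholders you'd tune, and you'd still add ACLs for each ESXi initiator.

    # Sketch only: striped-mirror ZFS pool, one zvol, exposed as an iSCSI LUN
    # via the LIO kernel target (targetcli). All names and sizes are placeholders.
    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # Striped mirrors ("RAID 10" in ZFS terms) across the 16 NVMe drives.
    mirrors = " ".join(
        f"mirror /dev/nvme{i}n1 /dev/nvme{i + 1}n1" for i in range(0, 16, 2)
    )
    run(f"zpool create -o ashift=12 tank {mirrors}")

    # One zvol to present as a LUN; volblocksize is something to benchmark.
    run("zfs create -V 4T -o volblocksize=64k tank/esx-lun0")

    # Expose the zvol through LIO; many targetcli versions create a default
    # 0.0.0.0:3260 portal automatically, so the portal step may be redundant.
    iqn = "iqn.2020-07.local.storage:esx-target"  # placeholder IQN
    run("targetcli /backstores/block create name=esx-lun0 dev=/dev/zvol/tank/esx-lun0")
    run(f"targetcli /iscsi create {iqn}")
    run(f"targetcli /iscsi/{iqn}/tpg1/luns create /backstores/block/esx-lun0")
    run(f"targetcli /iscsi/{iqn}/tpg1/portals create 0.0.0.0 3260")
    run("targetcli saveconfig")  # ACLs for each ESXi initiator IQN still needed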

BaronSamedi1958

Some suggestions, but you'll have to test to find the best solution.

I'd suggest using a supported Linux distro, Red Hat/CentOS or Debian/Ubuntu Server, and trying both an iSCSI target and an NFS server to see which gives the best results. Bonnie++ is a decent benchmarking tool. As for the filesystem, XFS or ext4, I don't think you'll see much difference, but you could run some tests there too. There's also F2FS, which is flash-friendly, but I wouldn't trust it on a production server yet.
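
For example, something along these lines against each candidate. Mount paths, dataset size, and labels are placeholders; the dataset should be at least twice the server's RAM so the page cache doesn't mask the drives:

    # Rough benchmarking sketch: run bonnie++ against an NFS mount and a
    # filesystem on an iSCSI LUN, keeping the CSV summary line for comparison.
    # Mount paths, dataset size, and labels are placeholders.
    import subprocess

    targets = {
        "nfs": "/mnt/test-nfs",      # NFS export mounted from the storage box
        "iscsi": "/mnt/test-iscsi",  # filesystem on an attached iSCSI LUN
    }

    for label, path in targets.items():
        # -s: dataset size in MB (use at least ~2x RAM), -n 0: skip small-file
        # tests, -m: label in the results, -u root: allow running as root.
        cmd = ["bonnie++", "-d", path, "-s", "65536", "-n", "0", "-m", label, "-u", "root"]
        print("+", " ".join(cmd))
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        # bonnie++ prints a machine-readable CSV summary as its last line.
        print(result.stdout.strip().splitlines()[-1])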

I assume you mean hardware RAID. I wouldn't go for software RAID if I had a hardware RAID controller available.

Ceph can be used as block storage, but it's meant to be distributed. On one machine, a single storage server, what would be the point? It certainly adds some overhead, and I don't think you'd get any better performance. Still, if you have the time, why not test this as well.

A Windows iSCSI target? I'm not totally against it, but I prefer a UNIX-like OS because of the customization options; Windows doesn't give the same amount of flexibility.

How about a more exotic option? NexentaStor is an OpenSolaris-based OS with native ZFS, built for storage services. It also has a community edition.

Share your results if you like!

Krackout
  • Just a couple of cents: for Windows iSCSI, I would recommend StarWind VSAN; in my experience it shows much better results compared to the MS iSCSI target. https://www.starwindsoftware.com/starwind-virtual-san As for the other suggestion, I agree a good Linux distribution exporting iSCSI to your hosts would work. I think it's worth the OP understanding his IOPS/latency requirements before building a solution. – Stuka Jul 12 '20 at 15:58
  • I'd rather avoid Nexenta, as it has been acquired by DDN, to be killed off eventually. https://nexenta.com/customer-and-partner-letter-nexenta-now-ddn-storage-company – NISMO1968 Aug 16 '20 at 16:03