
We would like to build a two-node file server cluster with shared storage on VMs. The issue is that our VMware setup doesn't support disk sharing. The file server cluster must, of course, be available all the time.

What other options do we have? I am considering:

- DFS - not a solution, as both nodes can write at the same time in case of a failover (known as a split-brain issue)

- Storage Replica - not sure here, but it sounds like an option? I don't think it can switch over automatically once one site goes down

- Storage Spaces - doesn't it require shared storage anyway?

I could also use storage outside of VMware (passthrough disk storage attached directly to the virtual servers). That means a separate LUN has to be created just for those two servers in the cluster, so it sounds like a lot of work (a rough sketch of presenting such a LUN to the guests is below).
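One way to present such a LUN directly to the guests, with no VMware-level disk sharing involved, is the in-guest iSCSI initiator. A minimal PowerShell sketch, assuming hypothetical node names FS01/FS02 and a portal address of 192.168.10.50 (none of these come from the question):

```powershell
# Run on BOTH cluster nodes so each guest sees the same LUN.
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Start the Microsoft iSCSI initiator and connect to the (hypothetical) target portal.
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Then, from either node, check whether the disk would be accepted as cluster storage.
Test-Cluster -Node FS01, FS02 -Include "Inventory", "Network", "Storage"
```

The storage section of the validation report tells you whether the LUN supports the persistent reservations a file server cluster actually needs from "shared storage".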

– suspense
  • Welcome to Server Fault! Your question appears to be broad, its answer would be primarily opinion based, and you are looking for a product recommendation. The StackExchange Q&A sites are intended for providing specific answers to specific problems. Please read [How do I ask a good question?](http://serverfault.com/help/how-to-ask) and consider revising your question, deleting your question or asking more than one question. And don't forget to take the [site tour](http://serverfault.com/tour). – Paul Dec 29 '21 at 13:08
  • If your vmware setup is already clustered, then why do you need to cluster the guests? (I'm not saying that there aren't valid reasons to cluster guests inside a clustered VM host, but has that been considered here?) – longneck Dec 29 '21 at 15:16
  • DFS-R should be avoided in production due to the lack of transparent failover, no reasonable way to resolve split-brain scenarios, and extra reads on the source splitting IOPS in half. – BaronSamedi1958 Dec 31 '21 at 18:36

3 Answers


You can build a nested failover cluster using shared iSCSI or FC storage. The following KB covers the requirements: https://kb.vmware.com/s/article/2147661

If you do not have separate shared storage, you can use something like StarWind VSAN. It creates replicated shared storage, which the cluster then consumes via iSCSI. Check the guide: https://www.starwindsoftware.com/resource-library/starwind-virtual-san-for-hyper-v-2-node-hyperconverged-scenario-with-windows-server-2016/
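To make this concrete, here is a hedged sketch of building the guest cluster once both nodes see the same iSCSI/FC disk (whether it comes from a SAN or from StarWind VSAN). The names FS01/FS02, FSCLUSTER, FILES01, the addresses and "Cluster Disk 1" are made-up examples, not values from the KB or the StarWind guide:

```powershell
# Validate first, then form the cluster from the two guest nodes.
Test-Cluster -Node FS01, FS02
New-Cluster  -Name FSCLUSTER -Node FS01, FS02 -StaticAddress 192.168.10.60

# Bring the shared iSCSI/FC disk into the cluster and publish a clustered file server on it.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterFileServerRole -Name FILES01 -Storage "Cluster Disk 1" -StaticAddress 192.168.10.61

# Continuously available SMB share on the clustered disk (assuming it mounted as E:).
New-SmbShare -Name Data -Path E:\Shares\Data -ScopeName FILES01 -ContinuouslyAvailable $true
```

Once the disk is owned by the cluster, failover of the file server role happens automatically, which is the availability requirement in the question.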

– Stuka

DFS - not a solution here because of the split-brain issues. Not funny.

Storage (native) replication - a valid option when using native solutions like NetApp, or as a storage backend for Windows servers when using a failover storage infrastructure.

Storage Spaces - works: set up (replicated) shared storage and multiple frontend servers in the same cluster.

Windows File Server Cluster (WFSC) - works: set up (replicated) shared storage (hardware (FC) or software (iSCSI)) and multiple frontends.

In all of those (valid) cases you will need shared storage for your frontends. Whether this is done in hardware or software depends on your setup. Valid options are, for example, FC/iSCSI targets on your SAN that can be mounted by multiple initiators (just like VMFS). A rough sketch of the software-replication route follows.
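If the "replicated" part is done in software with Microsoft Storage Replica rather than on the array, a minimal server-to-server sketch looks roughly like this. Volume letters, log volumes, replication-group names and the FS01/FS02 names are assumptions; note that standalone server-to-server replication is failed over manually, and automatic failover needs a stretch cluster:

```powershell
# On both servers: install the Storage Replica feature (Datacenter edition, or Standard with limits).
Install-WindowsFeature Storage-Replica -IncludeManagementTools -Restart

# Replicate the data volume E: from FS01 to FS02, using L: as the log volume on each side.
New-SRPartnership -SourceComputerName FS01 -SourceRGName RG01 `
    -SourceVolumeName E: -SourceLogVolumeName L: `
    -DestinationComputerName FS02 -DestinationRGName RG02 `
    -DestinationVolumeName E: -DestinationLogVolumeName L:

# Check replication state; the destination volume stays inaccessible until the partnership is reversed.
Get-SRGroup | Select-Object Name, ReplicationMode, Replicas
```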

– bjoster

ceph.io

from https://docs.ceph.com/en/latest/install/windows-install/:

The Ceph client tools and libraries can be natively used on Windows. This avoids the need of having additional layers such as iSCSI gateways or SMB shares, drastically improving the performance.

SUPPORTED PLATFORMS:

Windows Server 2019 and Windows Server 2016 are supported. Previous Windows Server versions, including Windows client versions such as Windows 10, might work but haven’t been tested.

Windows Server 2016 does not provide Unix sockets, in which case some commands might be unavailable.
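For illustration only: on a Windows node the Ceph client exposes RBD images as local disks through the bundled WNBD driver, roughly like this. The image name and size are made up, and a working ceph.conf/keyring plus a reachable Ceph cluster (at least three nodes, as the comment below points out) are assumed:

```powershell
# Create a (hypothetical) RBD image in the cluster and map it on the Windows node.
rbd create fsdata --size 100G
rbd device map fsdata

# The mapped image shows up as a regular disk and can be initialized and formatted as usual.
Get-Disk | Where-Object PartitionStyle -eq 'RAW'
```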

– Sto
  • It's bad advice actually… Ceph needs at least THREE nodes to function (the OP has only TWO) and FOUR nodes for reasonably error-free operation (software security patching without putting the whole cluster down for maintenance, erasure coding for cost efficiency, etc.). – BaronSamedi1958 Dec 31 '21 at 18:32