
We are in the process of moving all servers (or virtualising physicals) into a central datacenter connected via MPLS.

Currently we are fully redundant in most respects - RDS farms, SQL clusters, multiple DCs, hardware failover for firewalls, etc. The virtualisation cluster itself is a 4-node ESX cluster.

However, file servers are notoriously painful when it comes to achieving some form of redundancy should the virtual machine crash or fail for whatever reason. Naturally, if we were using Hyper-V, something like Scale-Out File Server (SOFS) with Client Access Points (CAPs) would work, but VMware leaves fewer options.

Is Distributed File System (DFS) really a robust enough method for file server redundancy, and if so, what are people's recommendations for its configuration? Or what other strategies have people implemented to achieve this?

Mister_Tom
    `Is DFS really a robust enough method for file server redundancy` - Yes it is. – joeqwerty Sep 22 '15 at 18:16
  • What's wrong with using cluster services with a multi-writer-enabled VMDK and then sharing that out over SMB 3.0 on Server 2016/19? This allows for full node failure and transparent client recovery for Windows 10 users (see the sketch below). – Chopper3 Jan 15 '19 at 15:46
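
As a rough illustration of Chopper3's suggestion (not from the thread itself - the cluster role name, disk label, share name, path, IP, and group are placeholder assumptions), a clustered general-purpose file server role with a continuously available SMB share might be set up like this, after the multi-writer VMDK has been presented to all nodes on the VMware side:

```powershell
# Sketch only: assumes a Windows failover cluster already exists across
# the VMs and that the shared (multi-writer) VMDK is visible to every
# node as "Cluster Disk 1". All names below are hypothetical.

# Add a general-purpose file server role backed by the shared disk
Add-ClusterFileServerRole -Name "FS-HA" `
    -Storage "Cluster Disk 1" `
    -StaticAddress 10.0.0.50

# Create a share on the clustered disk and mark it continuously
# available, so SMB 3.0 clients (Windows 8/10 and later) can
# transparently recover open handles after a node failover
New-SmbShare -Name "Data" `
    -Path "E:\Shares\Data" `
    -ContinuouslyAvailable $true `
    -FullAccess "CONTOSO\Domain Users"
```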

1 Answer


Yes, DFS will do what you're asking for.
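
For a plain two-server setup, the configuration is just a DFS namespace with a folder target on each server, plus a DFS Replication group to keep the targets in sync. Here is a minimal PowerShell sketch, assuming hypothetical server names (FS01/FS02), domain (contoso.com), and paths - none of these are from the original thread:

```powershell
# Minimal sketch: domain-based DFS namespace with two folder targets,
# kept in sync by DFS Replication. FS01/FS02, contoso.com and the
# paths are placeholder assumptions.

# Domain-based namespace hosted on FS01, with FS02 as a second root target
New-DfsnRoot -Path "\\contoso.com\Files" -TargetPath "\\FS01\Files" -Type DomainV2
New-DfsnRootTarget -Path "\\contoso.com\Files" -TargetPath "\\FS02\Files"

# A folder in the namespace with a target on each file server
New-DfsnFolder -Path "\\contoso.com\Files\Public" -TargetPath "\\FS01\Public"
New-DfsnFolderTarget -Path "\\contoso.com\Files\Public" -TargetPath "\\FS02\Public"

# Replication group so both targets hold the same data
New-DfsReplicationGroup -GroupName "Public"
New-DfsReplicatedFolder -GroupName "Public" -FolderName "Public"
Add-DfsrMember -GroupName "Public" -ComputerName "FS01","FS02"
Add-DfsrConnection -GroupName "Public" -SourceComputerName "FS01" -DestinationComputerName "FS02"

# Point each member at its local content path; FS01 seeds the initial sync
Set-DfsrMembership -GroupName "Public" -FolderName "Public" -ComputerName "FS01" `
    -ContentPath "D:\Public" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "Public" -FolderName "Public" -ComputerName "FS02" `
    -ContentPath "D:\Public" -Force
```

Keep in mind that DFS-R replicates files after they are closed and does not coordinate file locks between targets, so this suits read-mostly data or scenarios where users at a given site are consistently referred to a single target.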

You can also build a more advanced scenario combining DFS with failover clustering, as shown here:

Configuring high availability for the DFS Replication service

In this section, we take a look at the steps required for configuring a highly available file server on this newly created failover cluster. As a result of these steps, the DFS Replication service also gets configured automatically for high availability. Thereafter, this failover cluster can be added to a DFS replication group.

If you follow their steps, the final result looks roughly like the diagram below.

[Diagram: DFS Replication service configured for high availability on a failover cluster]

yagmoth555