4

I am looking to move our DFS-R file servers to SoFS to accommodate our User Profile Disks in Windows Server 2012 R2 remote desktop services.

DFS-R does not give us any failover capability as it stands, so if we lose the primary file server, users' profile disks will be dismounted.

I intend to create two nodes as VMs in VMware, and need a solution for the shared storage. Our VMware cluster has datastores on our SAN, so I am wondering if I can just use shared VMDKs at VMware level for this kind of scenario? If this is possible, can anyone point me in the direction of a good set of instructions on how to configure shared storage in this way?

Are there any dos/don'ts for shared storage in this scenario?

Has anyone created a SoFS setup on VMware before and have any advice that may be of use?

James Edmonds

2 Answers

6

Even taking into account the upcoming "VHD Set" feature in Windows Server 2016, you would still need some shared storage to store the VHDs. Try StarWind Virtual SAN or HP VSA; both can present local disks on each server as a shared HA datastore.

From my personal experience, configuring a "guest SoFS" can be a pain in the neck compared to a "hardware SoFS", because the failover algorithms of the MS Failover Cluster and the vSphere HA Cluster can kick in at the same time. But it definitely CAN be deployed with some effort.

Here is a nice guide describing an SoFS deployment on top of StarWind storage: https://www.starwindsoftware.com/technical_papers/Hyper-V2012_dedicated_iSCSI.pdf
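For reference, once the shared disks are visible to both nodes and brought online as Cluster Shared Volumes, the SoFS role itself is only a few cmdlets. A minimal sketch — cluster, node, share names, and the IP are placeholders, not details from this thread:

```powershell
# Assumes the FailoverClusters and SmbShare modules are available and the
# shared LUNs are already online on both nodes as Cluster Shared Volumes.
New-Cluster -Name SOFS-CL -Node FS01, FS02 -StaticAddress 10.0.0.50
Add-ClusterScaleOutFileServerRole -Name SOFS -Cluster SOFS-CL

# Continuously available SMB share for the User Profile Disks
# (the group name is a placeholder for your RDS session hosts).
New-SmbShare -Name UPD$ -Path C:\ClusterStorage\Volume1\UPD `
    -FullAccess "DOMAIN\RDS-Hosts$" -ContinuouslyAvailable $true
```

Continuous availability (SMB Transparent Failover) is what keeps the UPDs mounted when one node goes down.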

Strepsils
  • 1
    I must admit, I thought of setting up a VM with VMDKs attached to it, stored in our existing VMware datastores on the SAN, setting up StarWind, and then attaching those to the SoFS node VMs using iSCSI. I suspect this would suffer in performance though, as if the nodes are on different hosts, the storage would be limited by the link speed of the NICs on the host. We could dedicate NICs to those VMs, but that would require hardware additions/changes and downtime :( – James Edmonds Aug 10 '16 at 09:21
0

According to VMware (I couldn't quite make sense of their documentation, so I opened a support case directly), VMDKs in VMFS datastores are not supported for this, and we have to use RDMs.

The issue for us is that there is no available storage to dedicate to RDMs, as it has all already been used for VMFS datastores.

Sounds like RDMs will allow us to achieve what we need though.
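If you do go the RDM route, the disk has to be attached in physical compatibility mode on a SCSI controller with physical bus sharing, so that the guest cluster can use SCSI-3 persistent reservations. A hedged PowerCLI sketch — the VM name and device identifier are placeholders:

```powershell
# Placeholder names throughout; repeat per node, pointing at the same LUN.
$vm = Get-VM -Name SOFS-Node1

# Attach the LUN as a physical-mode RDM on a dedicated controller
# with physical bus sharing.
$rdm = New-HardDisk -VM $vm -DiskType RawPhysical `
    -DeviceName /vmfs/devices/disks/naa.60003ff44dc75adc
New-ScsiController -HardDisk $rdm -BusSharingMode Physical
```

Note that physical bus sharing disables vMotion for those VMs, which is part of the trade-off raised in the comments below.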

James Edmonds
  • 2
    Use some storage virtualization software like the referenced StarWind to get the job done. RDM-attached VMs have issues with vMotion, and VM backup tools have a hard time supporting them as well. My $0.02. – BaronSamedi1958 Aug 20 '16 at 04:07
  • Would we have to worry about bandwidth though? The current setup has our VMs connected to our SAN using 8Gb Fibre Channel cards. If we had a VM running StarWind, we would be limited by our NIC configuration (teaming)? – James Edmonds Aug 20 '16 at 11:55
  • 2
    You don't do NIC teaming with iSCSI; you do MPIO. If you run 10 GbE or 4x 1 GbE for storage only, you'll have no issues. – BaronSamedi1958 Aug 21 '16 at 07:51
  • That's the problem: each of our hosts only has 3x 1Gb NICs for VM traffic. We have no spare NICs to assign for storage only. Looks like there will need to be some hardware investment as well then. – James Edmonds Aug 21 '16 at 13:43
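For what it's worth, the MPIO approach described in the comments above looks roughly like this on each node. The portal addresses are placeholders, and it assumes dedicated storage NICs exist:

```powershell
# Enable MPIO and claim iSCSI devices (a reboot is required after
# the feature install before claims take effect).
Enable-WindowsOptionalFeature -Online -FeatureName MultipathIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# One portal per storage NIC subnet; MPIO aggregates the resulting paths.
New-IscsiTargetPortal -TargetPortalAddress 10.1.1.10
New-IscsiTargetPortal -TargetPortalAddress 10.1.2.10
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

This is a multipath configuration rather than teaming, so each session gets its own NIC and failover happens at the storage layer.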