5

I have a 2-node cluster where I enabled S2D. Later I added a 3rd node. Now I would like to check whether the resiliency mode changed to three-way mirror automatically. If not, I would like to change it manually.

The only script I know of doesn't tell me anything useful. Running Get-ResiliencySetting without any parameters doesn't show a three-way mirror option at all:

Get-StoragePool -FriendlyName S2* | Get-ResiliencySetting

Name   NumberOfDataCopies PhysicalDiskRedundancy NumberOfColumns Interleave NumberOfGroups
----   ------------------ ---------------------- --------------- ---------- --------------
Simple 1                  0                      Auto            262144     1
Mirror 2                  1                      Auto            262144     1
Parity 1                  1                      Auto            262144     Auto
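
If it helps, I assume something like the following should show what the existing volumes actually use, since the pool-level output above only shows the defaults for new virtual disks (assuming the cluster still uses the default S2D pool name):

# List existing virtual disks with the resiliency they were actually created with
Get-StoragePool -FriendlyName S2* | Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies, PhysicalDiskRedundancy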

Do I have to recreate the cluster to enable it?

Jan Zahradník

2 Answers

6

1) There are no automatic rebuilds with S2D: every time you add a new node, you have to re-create the pools with the newer resiliency option and migrate your data (see the sketch at the end of this answer).

2) S2D has horrible resilience with anything below the 4 nodes Microsoft initially planned to put into production. 3 nodes is way better than 2 but still can't match any other mature SDS on the market: it can't survive double faults.

EDIT: That's true for the 3-node configuration the OP was asking about; 4 or more S2D nodes with 3-way mirror can survive double faults.
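
A rough sketch of point 1): create a replacement three-way mirror volume, copy the data over, then drop the old two-way volume. The pool wildcard, volume name, and size below are placeholders, so adjust them before use:

# Create a new CSV volume that keeps three data copies (three-way mirror)
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume01-3way `
    -FileSystem CSVFS_ReFS -Size 1TB `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2

# After migrating the data, remove the old two-way mirror volume
# Remove-VirtualDisk -FriendlyName Volume01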

BaronSamedi1958
  • Can you please share a link explaining why a 3-way mirror fails with a double failure? – Jan Zahradník Dec 12 '17 at 16:29
  • @JanZahradník In S2D, a 3-way mirror does not automatically mean that your data is distributed equally over 3 hosts, which means there is always a chance of losing data if two servers go down. – Net Runner Dec 13 '17 at 09:07
  • @JanZahradník That isn't quite what I said... A 3-way mirror *can* survive a double failure in an (x >= 4) node configuration, but not within your original question's context, not with 3 nodes. You'll still have all the data in place, but the cluster will lose the majority of votes and a single node won't stay operational. You can bring in an extra external witness to build a 3+1 config, but it's dangerous: it works when 2 nodes go down one by one, but with 2 nodes failing at the same time you risk a split brain: the 2-node and 1+1 "siblings" will have exactly the same number of votes. – BaronSamedi1958 Dec 13 '17 at 12:22
  • @JanZahradník ...the best thing you could do is use a 4-node cluster + an external witness (an SMB3 share or an Azure one) to get a bullet-proof 4+1 configuration (a sketch of the witness setup follows these comments). With 3-way mirror or erasure coding (you can do multi-resilient disks, as plain erasure coding is SLOW) this config will survive double failures and won't depend on the sequence of node disconnects. – BaronSamedi1958 Dec 13 '17 at 12:24
  • @JanZahradník A good document is here: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance. *Three-way mirroring can safely tolerate at least two hardware problems (drive or server) at a time. For example, if you're rebooting one server when suddenly another drive or server fails, all data remains safe and continuously accessible.* (just make sure you realize the example assumes a 4-node configuration, not a 3-node one) – BaronSamedi1958 Dec 13 '17 at 12:25
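
A sketch of the external witness setup mentioned in the comments above, using the standard FailoverClusters cmdlets; the share path and storage account values are placeholders:

# File share witness on an SMB3 share that lives outside the cluster
Set-ClusterQuorum -FileShareWitness \\witness-server\S2DWitness

# Or a cloud witness in Azure
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"
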
6

In order to change the resiliency setting of your existing volumes, you have to recreate the volumes/pools with the new resiliency setting and migrate the data. S2D only re-balances the existing data across the newly added hosts, keeping the resiliency setting intact.
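
As a sketch of that, the pool's Mirror defaults can be bumped so that newly created volumes get three data copies; this does not touch volumes that already exist, which still have to be recreated as described above. Assuming the default S2D pool name:

# Make new Mirror volumes default to three data copies (existing volumes are not changed)
Get-StoragePool -FriendlyName S2* | Get-ResiliencySetting -Name Mirror |
    Set-ResiliencySetting -NumberOfDataCopiesDefault 3 -PhysicalDiskRedundancyDefault 2

# Verify the updated defaults
Get-StoragePool -FriendlyName S2* | Get-ResiliencySetting -Name Mirror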

S2D does not have LRC (Local Reconstruction Codes) within physical hosts, which means you are basically running on top of a kind of RAID 0 in each host. That applies to both 2-node and 3-node S2D clusters.

For smaller deployments (especially 2-node clusters) I would recommend looking at alternative solutions like HPE StoreVirtual VSA https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=VSA1TB-S or StarWind VSAN https://www.starwindsoftware.com/starwind-virtual-san-free, which do exactly what S2D does but can work on top of an internal hardware RAID array, keeping data locality and consistency within each host. This approach is called Grid Architecture https://www.starwindsoftware.com/grid-architecture-page and is much more beneficial for small clusters.

Net Runner
  • Many thanks for pointing me to HPE StoreOnce. I didn't know about the 1TB free license. – Jan Zahradník Dec 14 '17 at 07:38
  • It should be HPE Store*Virtual* VSA, not Store*Once*. StoreVirtual is a software-defined storage stack from HPE aimed at primary storage, while StoreOnce is a deduplication virtual appliance for backup purposes. Just my $0.02 :) – BaronSamedi1958 Dec 14 '17 at 10:59