7

I have read the Drive Symmetry Considerations and the Deploy Storage Spaces Direct articles, and I'm looking to understand the default behavior of the Enable-ClusterStorageSpacesDirect command as it relates to capacity. According to the articles, the slower drives are configured to provide the largest amount of storage while still maintaining resiliency.

I'm looking at setting up a very small environment to experiment with this, and was wondering how much usable storage I will end up with. I have 2 servers and plan on configuring a witness. The two servers are identical. Each has two 500 GB SSDs and three HDDs: a 1 TB, a 2 TB, and a 3 TB.

Based on what I have read, I understand that the pair of 500 GB SSDs will be configured as cache. Where I get lost is what will happen to the HDD capacity. According to the above articles, different drive sizes across servers are supported but may result in lost capacity, and different drive sizes within a server are also supported. However, the articles make no mention of stranded capacity.

Assuming I perform no extra configuration, would this setup result in 6 TB of usable storage, 3 TB (1 TB per drive, limited by the smallest), or something else entirely?

2 Answers

8

1) There are many ways to skin a cat. Depending on how you chop up your disks and how you configure the system (HDDs + SSDs for cache vs. mirror-accelerated parity, etc.), you'll get different usable capacity.

2) It's a bad idea to run S2D on only two nodes. It's fragile: any time you have one host down for whatever reason (maintenance, unplanned downtime, etc.) and just ONE disk fails in the surviving node, your cluster will go south.

BaronSamedi1958
  • 13,676
  • 1
  • 21
  • 53
  • 1) yes, I understand there are multiple options - I’m trying to understand default behavior. According to docs, my SSDs will be used as cache - but how will it configure the HDDs? – Curious Blueprints Sep 17 '18 at 15:37
  • 1
    2) Absolutely. This is not a production-level environment; I'm just looking to try this out. Additionally, I plan on having a witness, which will allow for one node to fail. After that it's 50/50 that the cluster would survive another node going offline, but again, this isn't an environment where that level of risk is unacceptable. – Curious Blueprints Sep 17 '18 at 15:40
  • 2
    (2) Your statement is wrong. A witness won't help with a disk failure in the second node. It's just a (very questionable) way to fight the split-brain issue. – BaronSamedi1958 Sep 17 '18 at 16:46
  • 3
    (1) You’ll get 1/2 of your raw capacity because of 2-way mirror. Similar case discussed before: https://social.technet.microsoft.com/Forums/en-US/23595fda-37bb-450f-90ec-25907eaafba4/how-to-use-manual-configuration-cluster-s2d-of-disk-mode-please-help-thanks?forum=ws2016 – BaronSamedi1958 Sep 17 '18 at 16:47
  • As for 2) I understand that a 2 server + 1 witness system can't survive a failure of two nodes, but it can survive one node failure without split brain. This is an acceptable level of risk for the environment - if the cluster goes South, nothing meaningful will be lost that isn't already backed up. – Curious Blueprints Sep 17 '18 at 17:34
7

The setup under consideration will result in 6 TB of usable capacity and 1 TB of cache per node.
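The arithmetic behind those numbers can be sketched quickly. This is a minimal Python illustration, assuming the defaults described above: the SSDs are claimed as cache devices, the HDDs are pooled as capacity devices, and a two-node cluster uses a two-way mirror (every slab written to both nodes, so usable capacity is half the raw HDD capacity):

```python
# Per-node drives, in TB. The two nodes are identical.
ssd_tb = [0.5, 0.5]   # SSDs: automatically claimed as cache (assumed default)
hdd_tb = [1, 2, 3]    # HDDs: pooled as capacity devices

nodes = 2
raw_capacity_tb = nodes * sum(hdd_tb)   # 2 * 6 TB = 12 TB raw
cache_per_node_tb = sum(ssd_tb)         # 1 TB of cache per node

# Two-node S2D defaults to a two-way mirror: one full copy on each node,
# so usable capacity is raw capacity divided by the number of copies.
mirror_copies = 2
usable_tb = raw_capacity_tb / mirror_copies

print(usable_tb, cache_per_node_tb)   # 6.0 usable, 1.0 cache per node
```

Note that the cache tier does not count toward usable capacity; it only accelerates reads and writes against the HDD tier.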

You can always check it with this calculator:

https://s2dcalc.blob.core.windows.net/www/index.html

I would also never trust 2-node S2D to keep mission-critical data. Unfortunately, such a setup is still far from perfect and seems not thoroughly tested. For a similar case, I tend to recommend the free third-party StarWind Virtual SAN:

https://www.starwindsoftware.com/starwind-virtual-san

Hope to see S2D being redesigned in a future release.

batistuta09
  • 8,981
  • 10
  • 23