
We have a couple of Dell PowerEdge R630 servers running Hyper-V and just bought new hard disks to fill all the empty slots. I was wondering what kind of RAID configuration we should proceed with.

Currently each server has 2x 300GB SAS drives in a RAID 1 configuration, where the OS and a few critical VMs live on a single volume/virtual disk. The rest of the VMs are stored on SAN iSCSI devices.

Now each server has a total of 8x 300GB SAS drives, and I think we have the following options (a rough usable-capacity comparison follows the list):

  1. Keep the RAID 1 volume with the two drives for the OS and create a new RAID 6 volume with five drives for data. In that case I can keep one disk as a global hot spare (i.e. manual swapping in case of failure) for both RAID volumes.
  2. Combine seven disks in a single RAID 6 volume (the remaining disk will be kept as a hot spare), either:
    • keeping the OS in a separate Dell virtual disk and creating a new one for data, or
    • expanding the current Dell virtual disk to full size, ~1.4TB. I could then have two partitions, or just one huge partition.
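
For reference, a quick back-of-the-envelope sketch of the usable capacity of each option (assuming a nominal 300GB SAS drive exposes roughly 279 GiB formatted, which is where the ~1.4TB figure comes from):

    # Rough usable-capacity comparison of the two layouts.
    # Assumption: a nominal 300GB SAS drive exposes ~279 GiB formatted.
    DRIVE_GIB = 279

    def raid6_usable(disks):
        # RAID 6 spends two drives' worth of capacity on parity
        return (disks - 2) * DRIVE_GIB

    # Option 1: 2-disk RAID 1 (OS) + 5-disk RAID 6 (data) + 1 global hot spare
    print("Option 1:", DRIVE_GIB, "GiB OS +", raid6_usable(5), "GiB data")  # 279 + 837 GiB

    # Option 2: 7-disk RAID 6 + 1 hot spare, everything in one virtual disk
    print("Option 2:", raid6_usable(7), "GiB")  # 1395 GiB, i.e. the ~1.4TB above

So option 1 trades roughly one drive's worth of usable space (about 279 GiB) for keeping the OS on its own spindles.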

I think in the past the norm would have been closer to my first option: keeping the OS in RAID 1 and the data in RAID 5 volumes.

Since a hot spare drive only swaps in automatically when it is assigned to a RAID volume, I would prefer to combine all the drives in a single RAID.

Considering the fact that we use SAS drives, should I be worried about any performance issues from having both the OS and the VMs on a single RAID volume?

I would be glad to hear your opinions and experiences.

Until now all of our servers had only two disks in RAID 1 hosting the virtualization OS (Hyper-V or vSphere) and a few VMs, with the rest offloaded to SAN/NAS iSCSI devices where we use RAID 5/6 depending on the controller. The plan is to keep the SAN/NAS solutions in place, but also have the ability to store VMs locally.

StashX

3 Answers


I would combine all the drives in one RAID, and as far as you wrote, you're going to run some critical VMs on this array. In that case I would build something like RAID 10 to avoid performance bottlenecks for your VMs (especially during an array rebuild, where RAID 6 won't be sufficient). BTW, why don't you think about High Availability for the business-critical VMs? It would be a more meaningful decision to replicate VMs across those two servers with something like StarWind or HPE, and the other VMs would live on the SAN.
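
To put rough numbers on the rebuild point (a simplified sketch only; I'm assuming ~279 GiB per formatted 300GB drive and ignoring any controller-specific optimisations):

    # Simplified view of how much data must be read from the surviving
    # drives to rebuild one failed member.
    DRIVE_GIB = 279  # assumed formatted capacity of a 300GB drive

    # RAID 10: the replacement disk is rebuilt from its single mirror partner.
    raid10_read = DRIVE_GIB

    # RAID 6: every surviving member must be read to recompute the lost data,
    # so the rebuild read load grows with the size of the array.
    def raid6_rebuild_read(members):
        return (members - 1) * DRIVE_GIB

    print(f"RAID 10 rebuild: ~{raid10_read} GiB read from one disk")
    print(f"7-disk RAID 6 rebuild: ~{raid6_rebuild_read(7)} GiB read across six disks")

That extra read load during a degraded rebuild is exactly when your VMs will feel it most.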

Stuka

I would personally go with Scenario 1. I would keep the RAID 1 for the Hyper-V operating system and create a RAID 6 (or a RAID 5 if you want even better performance; I believe you will not have issues with RAID 6 either, unless you have some very I/O-intensive VMs) and use one disk as a global hot spare.

Alexios Pappas
  • Thank you Alexi, we'll consider going for RAID 5 instead of 6. Could you please elaborate on your thoughts about keeping the OS on a separate RAID 1? – StashX Jul 17 '17 at 15:20
  • By keeping the OS on a different RAID you get better fault tolerance (RAID 1 + RAID 5 + GHS); in your situation, putting the OS on a combined RAID volume would also give you worse performance. And by separating the volumes, you can later replace either volume's disks without having to wipe both the VM volume and the Hyper-V OS volume. – Alexios Pappas Jul 17 '17 at 17:52

I would throw out the garbage (i.e. SAS discs) and go all SSD - did that on my end, and now all IO problems for any standard use are a thing of the past. Running multiple striped RAID 5s and happy with them.

Depending on your number of VMs (you say nothing beyond having discs and using Hyper-V), your performance on patch day and under heavier use will simply be horrific without SSDs and on a RAID 6 - no way around it. The combined IOPS can only be described with one word: pathetic. Seriously.

Or at least get some SSDs for caching. Get some SSDs (RAID 1) and use tiered Storage Spaces to combine an SSD RAID and an HDD RAID so you get SOME caching - it is QUITE limited, though, but then you can pin OS discs onto the SSD side.

The OS is not your problem - the VMs are. Multiple VMs multiply the IOPS requirement.
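
As a back-of-the-envelope sketch of what that means for a 7-disk array (the ~150 IOPS per 10k SAS drive, the classic write-penalty factors and the 70/30 read/write mix are all assumptions, not measurements from your hardware):

    # Back-of-the-envelope host IOPS estimate for a 7-disk array of 10k SAS drives.
    # Assumptions: ~150 random IOPS per drive, classic write penalties
    # (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6), and a 70/30 read/write mix.
    DISK_IOPS = 150
    DISKS = 7
    READ, WRITE = 0.7, 0.3
    WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

    raw = DISK_IOPS * DISKS  # what the spindles deliver in aggregate
    for level, penalty in WRITE_PENALTY.items():
        # Host-visible IOPS once each write is multiplied by the RAID penalty
        effective = raw / (READ + WRITE * penalty)
        print(f"{level}: ~{effective:.0f} host IOPS")

Split a few hundred IOPS across all your VMs and the per-VM budget gets thin very quickly.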

TomTom
  • Although I don't like the SAS disks myself, mainly due to their extreme cost, the hardware is already bought and I can't do much about that. Each host has around 15-20 VMs, but apart from a very few DB, file server and Exchange VMs (spread across multiple servers), the rest don't have high IOPS needs. Regarding my question though, I didn't catch your opinion: are you combining all of a server's disks into single RAID 5 volumes? – StashX Jul 17 '17 at 15:36