
Current build: two Dell R900s, 24 cores and 128GB RAM each. Local storage is 8x 144GB 15K 2.5" SAS on a PERC 6/i; shared storage is 15x 300GB 15K 3.5" SAS in a RAID 5 disk group with 6 virtual disks in the MD3000, attached via 2x PERC 5/E HBAs. OS is Windows 2012 R2 Datacenter.

Partition offset is 1024; block size is 64K.

Disk I/O on the host reaches an effective throughput of 360MB/s, but on the Windows VMs I'm getting an effective throughput of 35MB/s, which is 10% of the host figure (and the host itself is only getting 60% of the hardware's capability).
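For a quick cross-check of raw sequential write throughput inside a guest, a minimal sketch like the following can help; the file path, block size, and total size are placeholders, and on Windows, dedicated tools such as SQLIO or diskspd will give more realistic numbers than this:

```python
import os
import time

def sequential_write_mb_s(path, block_kb=64, total_mb=256):
    """Write total_mb of zeros in block_kb-sized chunks and return MB/s."""
    block = b"\0" * (block_kb * 1024)
    count = total_mb * 1024 // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the timing is honest
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the scratch file
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"{sequential_write_mb_s('bench.tmp'):.1f} MB/s")
```

Running the same script on the host and in a VM against the same LUN gives a like-for-like comparison with a 64K block size matching the allocation unit above.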

Any ideas to increase disk I/O?

longneck
  • Hosts are running Windows 2012 R2 Datacenter with the Hyper-V role. I am thinking of pulling the local disks, swapping in a spare set, and installing ESXi 5.5 to see if the I/O contention is an MS issue. – Robert The Architect Apr 25 '14 at 14:17
  • Is the disk using refs or ntfs? Are the virtualization extensions installed and working in the virtual machines? Are the VMs 2012 as well? – Grant Apr 25 '14 at 14:18
  • The LUNs are presented to the hosts formatted with NTFS, using virtual disks for the guests. Guests are Windows 2012 Standard, which has all the virtual extensions installed by default. Oh, and the virtual disks are thin provisioned. – Robert The Architect Apr 25 '14 at 14:49
  • What are you using to test/measure the disk I/O? How is the RAID configured on the local disks? – Rex Apr 25 '14 at 14:58
  • By thin provisioned you mean dynamically expanding VHDs? They can have a pretty big performance impact sometimes. – Grant Apr 25 '14 at 15:06
  • I'm using Windows performance counters to measure the I/O, along with a mixture of SQLIO tools and large file copies with robocopy. The array is a single RAID 5 disk group of 14 disks plus 1 spare. The LUNs are configured with a large block size (128, I think), as the GUI only gives you three options: database, files, and media; I chose media, as it equates to a large block size. I know thin provisioning has an I/O impact, but not that much: in our ESXi cluster connected to an EMC VNX over 8Gb Fibre and a 10Gb iSCSI NetApp, I only see a fractional difference between thin and thick. – Robert The Architect Apr 25 '14 at 15:18
  • I would say you might have Hyper-V to blame: bad drivers, ineffective handling of virtualization overhead, etc. Are you using VHDX, btw? – dyasny Apr 25 '14 at 15:55

1 Answer


If you aren't using VHDX, you definitely should be. Some more info can be found in the MS blog here.

dyasny