
I have two VMware servers (one ESX, one ESXi) and two backup NAS boxes. The current NAS boxes are low-cost and unsuitable for running VMs from: they support NFS only and are slow.

My plan is to have a dedicated iSCSI/NAS box for storing and running VMs, plus two additional low-cost boxes for backup.

I'm looking for advice regarding two things, really:

  1. Recommendations on VMware architecture/design for a smaller organization: fewer than 20 virtual machines, 2 servers + 2 x 1.5 TB backup NAS boxes.
  2. A good NAS/iSCSI box, with your recommendation on RAID configuration (I would go with RAID 6 or better).

I'm trying to design an installation that is both fast and reliable/redundant. If you have any experiences to share, or your current configuration including network design (switches, fiber, etc.), I will be enormously thankful. I'm not married to this idea, so if you have a design that doesn't use iSCSI NAS boxes, let 'er rip. Cost? Can we stay around $5,000 (on top of the already-stated components)?

Links to info are welcome also.

Thanks for reading!

Bubnoff

* UPDATE *

Thanks to all who responded. I began this post looking mostly at the NAS/SAN issue, but I'm beginning to think my main problem at this point is properly setting up our network for virtualization, with all the equipment that entails. The slow NFS I saw during testing is likely due to issues on our network rather than the protocol or the devices. We have a network consultant coming in this year, and I now have more ammo to work with in getting it right.
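Before blaming the protocol again, I'm going to sanity-check raw network throughput between a test box and the NAS. A minimal sketch, assuming iperf is installed on both ends (hostnames are placeholders):

    # On the NAS (or any box on the storage switch): listen for test traffic
    iperf -s

    # From a test machine: push TCP traffic for 30 seconds and report throughput.
    # Results well below ~940 Mbit/s on gigabit point at the network,
    # not at NFS vs. iSCSI.
    iperf -c nas01.example.local -t 30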

Any other network examples, gotchas or advice is welcome. Thanks again.

Bubnoff

4 Answers


Don't ignore NFS.

ESX can use NFS just as easily as it can use FC / iSCSI, and NFS is a lot easier to live with depending on the rest of your infrastructure.

And if you go with NFS, you can just get a Dell/HP box with lots of storage, or a simple FC shelf and a pair of Dell/HP boxes.
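To give a sense of how little plumbing NFS needs on the ESX side, here's a sketch of mounting an NFS export as a datastore from the service console (the filer hostname and export path are made-up examples):

    # Add an NFS export as a datastore labelled "vmstore"
    esxcfg-nas -a -o filer01.example.local -s /vols/vmstore vmstore

    # List NFS datastores to confirm the mount
    esxcfg-nas -l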

chris
  • I've got nothing against NFS. It's just slow in comparison to iSCSI. I've tested NFS on ESX and it's noticeably slow. A network guy tells me there's more on the wire with NFS. – Bubnoff Jan 14 '11 at 00:05
  • That's the subject of lots of discussion but my experience is that convenience trumps performance, and that you'll have plenty of performance with 10gig ethernet either way. – chris Jan 14 '11 at 03:03
  • 10gig ethernet by itself would probably put him over his budget. – ErnieTheGeek Jan 14 '11 at 15:38
  • @Bubnoff - we've tested NFS extensively based on NetApp FAS3140 kit and it totally kept up with iSCSI - I think the slowness you've seen may be implementation specific. – Chopper3 Jan 14 '11 at 15:47
  • Are you running Linux or something else that can boot from NFS? – ptman Jan 14 '11 at 16:13
  • VMware continues to support NFS datastores - everything of theirs I have read says it works great. – dunxd Jan 14 '11 at 16:19
  • @ptman - You may be right. I'm basing a lot of my impressions on tests run on a non-optimized (er, worse than that actually) network. Our network guru insists that NFS passes more over the wire and is busier than iSCSI and similar. – Bubnoff Jan 14 '11 at 17:15
  • I've heard VMFS locking adds overhead, so in many cases NFS can be better than iSCSI. – 3molo Jan 16 '11 at 08:40

Your budget constraints make me think you should go the route of building a system yourself.

I've been a big proponent of Solaris ZFS-based solutions for NAS and/or iSCSI backends for VMware. With some of the fuzziness surrounding Oracle's acquisition of Sun, I've started using NexentaStor in client deployments. The platform is attractive because of inline compression, deduplication and the ability to present iSCSI storage as well as NFS. See the following article for ZFS platform information:

http://www.anandtech.com/show/3963/zfs-building-testing-and-benchmarking

For the most recent installations, I've been using HP ProLiant DL180 G6 storage nodes and outfitting them with 24GB-48GB RAM, LSI 9211 SAS controllers to replace the onboard Smart Array RAID controllers, and a mix of solid-state (cache), 15k RPM and low-speed 7.2k RPM SAS disks, depending on the application/environment. Add some additional NICs (2 or 4-port gigE) and it's a good setup that is probably a step up from using a low-end appliance or raw Linux NFS.

Nexenta works well with the hardware (drive LEDs, HP agents, etc.). Using this solution, I'm at $5000-$8000 per storage node, depending on drive type. You wouldn't need something this involved, but if you do use a ZFS-based solution, ballpark system requirements for your arrangement would be 6 or more data drives in RAID 1+0 or RAID 5+0 (avoid RAID 6), 8+ GB of RAM, and multiple dedicated NICs for your storage network (on both the ESX and storage-node sides).
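To make that concrete, here's a minimal sketch of the pool layout on a plain Solaris-derived ZFS box (device names are illustrative; NexentaStor wraps the same operations in its own management interface):

    # Six data drives as three mirrored pairs (the RAID 1+0 equivalent)
    zpool create tank mirror c0t0d0 c0t1d0 \
                      mirror c0t2d0 c0t3d0 \
                      mirror c0t4d0 c0t5d0

    # Optional SSD as an L2ARC read cache
    zpool add tank cache c0t6d0

    # A filesystem for VMs, compressed inline and shared over NFS
    zfs create tank/vmstore
    zfs set compression=on tank/vmstore
    zfs set sharenfs=on tank/vmstore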

A commercial setup from PogoLinux may also work. I went the route of building my own because I prefer HP hardware, but there are some canned ZFS solutions WITH SUPPORT available here:

http://www.pogolinux.com/products/storage_director

If this is too involved, your next option is something like an HP MSA P2000 SAN; perhaps one of the SAS-attached models like the 2312sa. It's a step up in price, though. Maybe ~$13k+ US for what you're looking for.

ewwhite
  • Thanks, I've been looking for ZFS solutions that weren't too hard-hitting on the budget. I'll look at what NexentaStor has. – Bubnoff Jan 13 '11 at 23:59
  • Yeah, now that I think of it, I'd probably steer you towards the PogoLinux solution. The cost would be fairly close to your target, and definitely better than what you'd find in a lower-end appliance. – ewwhite Jan 14 '11 at 00:05

Whether you go for NFS or iSCSI, you should budget for dedicated networking equipment for storage. Don't run your storage on the same network as your servers or PCs normally connect to.

Buy a couple of 1Gb switches (10Gb if you can afford it). Make sure you have two NICs per VMware host just for storage. Get storage hardware with dual NICs. Make sure your storage and hosts are connected to both switches. If you have anything else that needs to talk to your storage, add NICs for that purpose. That way you are not contending with other traffic for bandwidth.
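On the ESX side, that dedicated storage network comes down to a separate vSwitch with its own uplinks and a VMkernel port. Roughly like this from the service console (NIC names and addressing are placeholders):

    # Dedicated vSwitch for storage traffic, with two physical uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1

    # Port group and VMkernel interface for NFS/iSCSI traffic
    esxcfg-vswitch -A Storage vSwitch1
    esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 Storage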

I am sure that RAID 5 will be fine, and you should be able to find all of the above for less than $5k (not including your VMware licensing).

dunxd

If you don't want to build yourself, check out the SnapServer from Overland. Lots of value for the dollar in these boxes from a company that is quite reputable in the storage arena.

http://www.overlandstorage.com/products/network-attached-storage/index.aspx#top

The N2000 starts at $5k.

B. Riley
  • Thanks man, I really appreciate all the feedback on this. It's a big step for us. Better to get it right at the outset. – Bubnoff Jan 14 '11 at 17:20