5

I'm debating between installing ESXi 4.1/5.0 on a USB stick or on a pair of 32GB SATA II SSDs in a RAID1 mirror. According to VMware's documentation, ESXi looks for 4GB of space for a scratch partition when it boots. Assuming the USB stick I use has adequate storage (say 16GB), is there any advantage to installing ESXi on an SSD or, better yet, an SSD RAID1 mirror? Note: the documentation states that ESXi takes up 5.2GB of free space on disk; 5.2GB + 4GB = 9.2GB, which is well below the 16GB of the USB stick.

I'd prefer to use the SSDs for something other than the ESXi installation. My main concern is the USB stick dying for whatever random reason. Can I just reinstall ESXi on another USB stick and have ESXi pick up as if nothing happened?


According to VMware's ESXi 5.0 Performance Best Practices (PDF link):

You can optionally configure a special host cache on an SSD (if one is installed) to be used for the new swap to host cache feature. This swap cache will be shared by all the virtual machines running on the host, and host-level swapping of their most active pages will benefit from the low latency of SSD. This allows a relatively small amount of SSD storage to have a potentially significant performance impact.

NOTE Using swap to host cache and putting the regular swap file in SSD (as described below) are two different approaches for improving host swapping performance. Swap to host cache makes the best use of potentially limited SSD space while also being optimized for the large block sizes at which some SSDs work best.
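Before relying on swap to host cache, it's worth confirming that ESXi has actually detected the device as an SSD. A hedged sketch from the ESXi shell (Tech Support Mode) on 5.0; the host cache itself is then configured through the vSphere Client:

```shell
# List storage devices and whether ESXi classifies each one as SSD.
# "Is SSD: true" must appear for a device before host cache can use it.
esxcli storage core device list | grep -iE "Display Name|Is SSD"
```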

osij2is
  • Dell gives you an option if you are ordering a server only for ESXi. This option puts ESXi on an SD card which the server boots to. – mdpc Aug 08 '12 at 22:27
  • Bear in mind that USB sticks are notorious for failing catastrophically, without warning and always at the most inconvenient time. I don't know how SSDs compare in that regard but believe their error detection and recovery is better. – John Gardeniers Aug 09 '12 at 03:12
  • Updated the post with some documentation about best practices for performance for ESXi 5.0 from VMWare. Very interesting and useful information. – osij2is Aug 09 '12 at 15:37
  • @JohnGardeniers: SSDs are subject to the same, as I know from recent experience. – user9517 Aug 09 '12 at 15:49

4 Answers

10

We have some ESXi boxes that boot off SD/USB; it's OK, it works - nothing to write home about.

But if you use SSDs with v5.0U1, it'll use the SSD as swap space for very significantly improved system performance in memory-contended situations. That said, you need to make sure you use an HCL-compliant SSD (same as every other component), and to be honest their cost would be better put toward buying more memory.

Just stick to booting from a pair of HW mirrored small disks, that'll do just fine.

Chopper3
  • I would definitely go for a pair of small regular disks too (or even a single disk), as the performance greatly increases (from my experience) with such a setup. The only reason why I have to run it from USB is because I don't have any more room for more drives :( – Frederik Aug 08 '12 at 20:07
  • I think SSDs for swap would be the most effective use. – osij2is Aug 09 '12 at 15:31
7

You have several questions, so here goes from top to bottom:

  • Yes, you can reinstall ESXi on another USB stick and load up your VMs from the datastore. Network settings etc. will be lost, however. (Maybe not if the host is in a vCenter cluster - but I am not sure about that.)

  • I am running ESXi on 4GB USB sticks without any trouble, so I don't really know about that scratch partition.

  • I would choose the SSD disks, as my experience tells me that booting ESXi from a USB stick is terribly slow, which you don't want when you are rebooting in the middle of the day during work hours.
    However, rebooting an ESXi server doesn't happen that often, and if you are using a vCenter cluster, the reboot time should not matter that much.

Note: I use ESXi 5U1
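For the reinstall scenario in the first bullet, the usual belt-and-braces approach is to back up the host configuration before the stick dies and re-register the VMs afterwards. A hedged sketch using commands present on ESXi 4.x/5.x; the paths and datastore/VM names are examples:

```shell
# Before the USB stick dies: save the host configuration.
# This prints a URL from which configBundle.tgz can be downloaded.
vim-cmd hostsvc/firmware/backup_config

# After reinstalling on a fresh stick (host in maintenance mode),
# restore the saved bundle; the host reboots when it's done.
vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz

# Without a backup, re-register each VM from the datastore by hand:
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx
vim-cmd vmsvc/getallvms   # confirm the VM shows up
```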

Frederik
  • Cool. Thanks for the input. After *carefully* rereading the documentation, apparently the scratch partition is *not* required. Doh. – osij2is Aug 08 '12 at 19:49
3

In some circumstances, having those SSD drives as local scratch drives - for particularly latency-sensitive tasks, caching or even swap - can be a win. The SSD solution is quite a bit more expensive than the USB one. If all you're going to do with the SSDs is boot, then go with the USB stick.

Frankly, though, there are several ways to netboot VMware ESXi that work well and can relieve you of the need for any boot media.
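On the netboot route: the vSphere 5.0 documentation describes PXE booting ESXi via pxelinux and mboot.c32. A minimal pxelinux.cfg fragment, assuming the installer tree has been copied to an esxi5/ directory on the TFTP server:

```
DEFAULT esxi5
LABEL esxi5
  KERNEL mboot.c32
  APPEND -c esxi5/boot.cfg
```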

rnxrx
3

I've done the USB, SD and SSD boot solutions for ESXi. If you're in a position where you have shared storage, the choice doesn't matter too much. It's a function of resources and your environment.

The best-practices/default is a RAID 1 pair of appropriately-sized drives. If you have a pair of 72GB 10k RPM disks lying around because other storage systems have been decommissioned, that's an easy choice.

I use HP servers and other systems that feature internal USB and SD card slots. If I have several hosts, leveraging the internal ports rather than disks saves me on RAID controller and physical disk costs. It also means that deployment can be quicker.
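Quicker deployment to those internal USB/SD slots usually means a scripted install. A minimal kickstart (ks.cfg) sketch for ESXi 5.0, where the --firstdisk=usb filter targets the internal stick; the password is a placeholder:

```
vmaccepteula
rootpw ChangeMe123
install --firstdisk=usb --overwritevmfs
network --bootproto=dhcp
reboot
```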

Using SSDs is possible. Maybe use a single disk or a mirrored pair... but don't rely on them for scratch/swap needs. If possible, invest in more RAM to counter a constrained-resource condition.

ewwhite
  • Yeah, I'm tapped out at RAM (32GB ECC Registered) but according to these articles, looks like SSDs for swap really boosts performance *should* swap be necessary (SQL databases seem to benefit). http://techhead.co/vmware-esxi-vswapping-with-sandforce-ssds/ http://communities.vmware.com/people/chethank/blog/2009/12/22/using-solidstate-drives-to-improve-performance-of-sql-databases-on-vsphere-hosts-when-memory-is-overcommitted – osij2is Aug 08 '12 at 20:24
  • You didn't mention if you had shared storage or not. The 32GB RAM limit you mention makes it seem like this is a standalone ESXi system, possibly with internal storage. If that's the case, I wouldn't split the ESXi boot onto its own storage. I'd just carve off a small logical drive on your main internal array to accommodate boot. – ewwhite Aug 08 '12 at 23:20
  • I do have shared storage (iSCSI) but I also have local storage as well (x4 1TB SATA II). Not sure what to do with the 4 HDDs. I was initially thinking using it for ISO storage but that's overkill. Maybe put a few VMs locally and the rest on iSCSI. I do have 2 250GB SATA II disks just lying around. I could RAID1 those into a mirror for boot. – osij2is Aug 09 '12 at 15:27