Our configuration is simple, crude even, but it's effective for our purposes: all vSwitches (one for each VLAN) get all NICs. Each host has four NICs, connected in pairs to two switches (Juniper EX4300s in our case): NICs 1 and 2 go to Switch A, NICs 3 and 4 go to Switch B. All switch ports carry all the VLANs for that host (or rather for all the hosts on that vCenter cluster); I think that's a total of five VLANs at the moment.
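For anyone who wants the nuts and bolts, the per-host side boils down to roughly this in esxcli. It's only a sketch: the vSwitch/port group names, vmnic numbers and VLAN ID below are made up, and I'm showing it as a tagged port group per VLAN on a vSwitch that owns all four NICs, so adjust to your own layout.

```
# Rough sketch only; names, vmnic numbers and VLAN ID are placeholders
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# One tagged port group per VLAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=VM-VLAN10
esxcli network vswitch standard portgroup set --portgroup-name=VM-VLAN10 --vlan-id=10

# All four uplinks active with the default port-ID teaming (no LACP/IP hash required)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3 --load-balancing=portid
```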
Bottom line: I yanked power completely from one switch during the initial testing phase of our build-out and, with the exception of a few dropped packets while things reoriented themselves, it was seamless. No LACP necessary. VMware's NIC teaming handled the aggregation and Layer 2 failover with as much grace as I'd expect. The Virtual Networking Concepts PDF is a good, easy, fast read and gives you a great overview of vSwitches and how the different teaming policies behave.
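If you want to double-check what the teaming policy is actually set to on a host, esxcli will show it (the vSwitch name below is a placeholder). We just relied on the defaults, which are roughly equivalent to:

```
# Show the current teaming/failover policy for a vSwitch (name is a placeholder)
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1

# Plain link-status failure detection with failback and switch notification;
# effectively the default behavior that carried us through the pulled-switch test
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --failure-detection=link --failback=true --notify-switches=true
```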
Since our switches are in a virtual chassis, I can run LACP across them, so where I can I do use LACP links, with half going to Switch A and half going to Switch B. I spent a lot of time trying to sort out a way to get this going on ESXi, like you are trying to do, because there was no way we were paying for Enterprise Plus just to get the VDS. In the end, our solution works as well as I could hope, on the Standard vSwitch no less.
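For the links where I do use LACP, the Juniper side is just a normal LAG with one member port on each virtual-chassis member. From memory it's something like this (the ae number, interfaces and VLAN names are placeholders, so double-check against your Junos version):

```
# Rough Junos sketch: LACP bundle split across the two virtual-chassis members
set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/10 ether-options 802.3ad ae0    # member 0 = "Switch A"
set interfaces ge-1/0/10 ether-options 802.3ad ae0    # member 1 = "Switch B"
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members [ vlan10 vlan20 ]
```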