We have customers' infrastructure hosted on physical HP boxes across several racks, and we are finally planning to migrate them to vSphere. Data is stored on an NFS SAN. It was suggested that we use Dell C-Series servers for the hypervisors. In the past I have been involved in projects that used blades for virtualization, and C-Series feels like an unusual choice to me. I am trying to figure out the pros and cons of blades versus C-Series in our scenario. Does anyone have experience with C-Series and vSphere deployments?
1 Answer
It's probably fair to say that the C-Series allows a higher concentration of hosts in a rack, but perhaps fewer resources per host; in particular, network connections may be limited compared to those possible on a blade like the M610, for example.
That might or might not be an issue for you; only you know the exact details of your requirements.
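As an aside, it can help to quantify the "network connections per host" point before committing to a form factor. Here's a minimal sketch using pyVmomi (my choice of tool, not something mentioned in this thread; the vCenter address and credentials are placeholders) that lists each host's physical NICs and link speeds, so you can compare current uplink usage against what a C-Series box or a blade slot would offer:

```python
# Minimal pyVmomi sketch: list each ESXi host's physical NICs and link speeds.
# Hypothetical vCenter address and credentials; pip install pyvmomi first.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk every HostSystem in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else "link down"
            print("  {}: {} Mb".format(pnic.device, speed))
finally:
    Disconnect(si)
```

If most hosts turn out to use only a couple of uplinks, the C-Series NIC count may be a non-issue; if you're maxing out six-plus ports per host, the blade chassis interconnect story matters more.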

Rob Moir
- Not sure. Check the new micro blades – quarter height. That is 32 (!) blades, 2 sockets each, in one chassis. Dell finally gets it. – TomTom May 28 '12 at 11:44
- @TomTom yes, I like the Dell blades; we've had various models of Dell blade for quite a while, and overall I like what they're doing now. – Rob Moir May 28 '12 at 16:21
- I have yet to see Dell's answer to FlexFabric. I could not get a good answer from a Dell sales rep on what they suggest to tame the explosion of cables at the back of the chassis. – Sergei May 28 '12 at 16:36
- That is a problem. The backs of the cabinets where we've deployed blades and maxed out the network connections into each blade look like party time at Cthulhu's house. – Rob Moir May 28 '12 at 18:29
- Sorry? What about using some in-chassis switches with 10Gb going out, or an InfiniBand fabric? I fail to see how one could need more than a handful of network cables PER CHASSIS ;) – TomTom May 28 '12 at 20:22
- We're using 2 10Gb connections per blade as it is, @TomTom, plus some 1Gb connections. Nobody's going to be replacing these with a mobile phone server any time soon ;-) – Rob Moir May 28 '12 at 22:06
- Ah? InfiniBand has 40 gigabits per cable ;) with 45 ns (that is, nanoseconds) constant latency. 10G is SLOW like molasses compared to that. – TomTom May 29 '12 at 07:56
- @TomTom agreed, but we have infrastructure in our server rooms that is built around 10Gb links; we can't afford to add InfiniBand infrastructure too. – Rob Moir May 29 '12 at 08:10