I also posted this scenario on SO, with different questions for that audience - and I'm glad I did, as I've received some very good responses.
We are attempting to implement a development environment using virtualization for a small team of 4 developers within an enterprise organization. This would let us stand up separate development, testing, and staging environments, as well as give us access to new operating systems required by systems or tools we are evaluating.
We re-purposed an existing workstation-class machine, threw in 24 GB of RAM and a RAID-10 array, and were doing fine until we attempted to join the machine to the domain.
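For the curious, the step that hit the wall was nothing exotic - just a standard domain join, roughly the following (the domain name is a placeholder, not our actual environment):

```
# Join the virtualization host to the corporate domain, then reboot.
# "corp.example.com" stands in for the real domain; Get-Credential
# prompts for a domain account allowed to create the computer object.
Add-Computer -DomainName "corp.example.com" -Credential (Get-Credential)
Restart-Computer
```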
Now we are beginning the war that all enterprise developers since the beginning of time have had to fight - the fight for local control of a development and testing environment. The network and IT admins have raised a number of concerns ranging from "ESX Server is the enterprise standard" to "servers are not allowed on client VLANs" to "[fill-in-the-blank] is not a skill set currently possessed in the local or enterprise IT organization".
We could probably justify production-level hardware and formal IT support (read: we could make the case if we had to, but it would involve a whole lot of headache) - yet treating this as a production system would likely take months to get IT resources formally assigned, and even then we would likely lose the local control we want.
I imagine that many of you have fought similar battles with developers in your enterprise over control of non-production environments, so my questions are as follows:
- What arguments have your developers made that won you over and led you to allow these kinds of silos to exist in an enterprise whose standard network and security policies would generally (and understandably) preclude this type of non-centrally-managed infrastructure?
- Is this just a matter of the developers making a technical or business justification and ensuring that patch management and AV will actually happen - or is it more of a political struggle over control and ownership?
- Given the choice, would you prefer to take ownership and support of the hardware/OS while giving the devs local admin rights, or to let them manage it entirely - ensuring that they institute patch management and AV, and holding them responsible should they cause problems?
- If you successfully blocked developers from having local control of "rogue servers" on your infrastructure, did the developers just make do, or did they (or you) move the development environment to an isolated VLAN or an entirely separate network?
A couple of assumptions to limit the scope of this question:
- To reiterate, this is for a development environment - no production loads, no supportability requirements, and nothing externally accessible.
- This is not a Hyper-V vs. ESX holy war (we would be fine with either, but Hyper-V was selected since it is "free" with MSDN for these purposes [yes, VMware has free tools too, but the good management tools generally aren't], and would be easier for the local developers to manage in a "Microsoft shop" - see the VM-creation sketch after this list) - so arguments for or against either are outside the scope of this question.
- The dev team has already committed to either handling patch management and antivirus itself or integrating with the existing enterprise systems if IT will support that (see the WSUS sketch after this list) - but whether or not you are willing to accept those assurances is certainly within scope.
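To give a sense of the day-to-day management story on the Hyper-V side, here is a minimal sketch of standing up a dev guest from the host - assuming the Hyper-V PowerShell cmdlets are available (older hosts would use WMI or the management console instead); the VM name, sizes, paths, and switch name are all placeholders:

```
# Create a development VM on the local Hyper-V host.
# Name, memory, disk size/path, and switch are illustrative only.
New-VM -Name "dev-test-01" `
       -MemoryStartupBytes 4GB `
       -NewVHDPath "D:\VMs\dev-test-01.vhdx" `
       -NewVHDSizeBytes 60GB `
       -SwitchName "DevSwitch"

Set-VMProcessor -VMName "dev-test-01" -Count 2   # two vCPUs for the guest
Start-VM -Name "dev-test-01"
```

The point is that the whole lifecycle (create, snapshot, tear down) stays scriptable by the dev team without touching enterprise tooling.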
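And on the patch-management assurance: "integrate with the existing enterprise systems" would amount to pointing the host (and domain-joined guests) at the corporate WSUS server through the standard Windows Update policy registry keys - a sketch, with a placeholder server URL (in practice IT would push the same keys via Group Policy):

```
# Point Windows Update at the corporate WSUS server.
# The URL is a placeholder for the real WSUS endpoint.
$wu = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
New-Item -Path $wu -Force | Out-Null
New-Item -Path "$wu\AU" -Force | Out-Null
Set-ItemProperty -Path $wu -Name WUServer       -Value "http://wsus.corp.example.com"
Set-ItemProperty -Path $wu -Name WUStatusServer -Value "http://wsus.corp.example.com"
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1 -Type DWord
```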