I have some custom system-management scripts/applications that I would like to performance-test against a large number of systems (10,000+) to simulate a large-scale enterprise network.
The application runs on a server that makes network connections to domain-joined desktop systems and performs various management tasks (WMI calls).
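For a sense of what those tasks look like, each call is roughly of this shape (a simplified, untested sketch; the WMI class and `clients.txt` are placeholders, the real scripts do quite a bit more):

```powershell
# Simplified example of a single management task; clients.txt is a stand-in
# for however the client list is actually sourced.
$computers = Get-Content .\clients.txt

foreach ($computer in $computers) {
    # Each Get-WmiObject call opens a DCOM/RPC connection to the remote client
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computer |
        Select-Object CSName, LastBootUpTime
}
```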
My idea is to use Microsoft Azure, with some automation scripts, to create roughly 10,000 Windows 7/8.1 VMs that join a domain on a private virtual network. The application server would also sit in that same virtual network.
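Roughly what I had in mind for the provisioning loop, pieced together from the Azure PowerShell docs (untested; the image, domain, credential, and network names below are all placeholders):

```powershell
# Untested provisioning sketch using the Azure (service management) cmdlets.
# Assumes the 'perf-clients' cloud service and 'PerfTestVNet' already exist;
# in reality the VMs would presumably be spread across many cloud services.
$image = 'Windows-8.1-Enterprise-x64'   # placeholder; would come from Get-AzureVMImage

1..10000 | ForEach-Object {
    $name = 'client{0:D5}' -f $_

    $vm = New-AzureVMConfig -Name $name -InstanceSize ExtraSmall -ImageName $image |
          Add-AzureProvisioningConfig -WindowsDomain `
              -AdminUsername 'localadmin' -Password 'P@ssw0rd!' `
              -JoinDomain 'corp.test' -Domain 'CORP' `
              -DomainUserName 'joiner' -DomainPassword 'P@ssw0rd!' |
          Set-AzureSubnet -SubnetNames 'Clients'

    New-AzureVM -ServiceName 'perf-clients' -VMs $vm -VNetName 'PerfTestVNet'
}
```

If there is a saner way to bulk-provision at that scale, that is part of what I'm asking.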
The goal of the performance testing is to verify that the application's multi-threading, queuing, memory management, and overall behavior hold up at this scale. The per-client workload is light (no significant network, disk I/O, or client memory/CPU impact), so I would provision bare-minimum specs for the desktop VMs.
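The concurrency pattern whose behavior I want to validate is essentially a bounded fan-out, something like this simplified sketch (the throttle of 64 is arbitrary, and `$computers` is the client list from above):

```powershell
# Simplified version of the fan-out the test needs to exercise: a bounded
# runspace pool so the 10,000 clients are queried N at a time, not all at once.
$pool = [runspacefactory]::CreateRunspacePool(1, 64)   # 64 = arbitrary throttle
$pool.Open()

$jobs = foreach ($computer in $computers) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    [void]$ps.AddScript({
        param($c)
        Get-WmiObject -Class Win32_OperatingSystem -ComputerName $c
    }).AddArgument($computer)
    [pscustomobject]@{ Shell = $ps; Handle = $ps.BeginInvoke() }
}

# Drain the queue; this is where queuing and memory behavior show up at scale
$results = foreach ($job in $jobs) {
    $job.Shell.EndInvoke($job.Handle)
    $job.Shell.Dispose()
}
$pool.Close()
```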
I haven't used Azure before. Is this a standard approach for a test like this? Are there any limitations or issues I should be aware of? What do industry experts do when they need to test a large-scale environment like this? I have also looked at AWS and Google Compute Engine, but it doesn't look like I can create Windows 7/8.1 VMs in those environments, which is why I'm focusing on Azure. I wish Azure also supported Windows XP, but I can live without it for now.
I don't currently have a vSphere infrastructure large enough to support that many VMs locally, which is why I'm looking to the cloud at a reasonable price point.
Any advice/guidance you can provide would be greatly appreciated.