I am currently running builds and tests with Jenkins, using a multi-configuration project to compile and test our application on machines running different operating systems.
In expanding our testing, I now need to run tests against a distributed environment, and I am not sure how to set this up in Jenkins.
I have four computers simulating my production environment (Prod1 - Prod4), each running multiple instances of our application as a service (foo.exe), i.e.:
Prod1: Instance 1, Instance 2, Instance 3
Prod2: Instance 1, Instance 2, Instance 3
Prod3: Instance 1, Instance 2, Instance 3
Prod4: Instance 1, Instance 2, Instance 3
Prod1 and Prod2 run one OS version (Windows Server 2012), while Prod3 and Prod4 run a different OS (Windows Server 2016, Linux, etc.). The services are designed to be able to reside on different computers for fault tolerance and scalability.
Before I run our tests, I need a script to reach out to each computer, stop the existing service instances (a batch file can do this on Windows), copy in the new executable from a Jenkins build job, and then start the instances again. This is easy enough with a multi-configuration job.
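For a single machine, something like this scripted-pipeline step is what I have in mind; the node label, the upstream job name, the per-instance service names (foo1 - foo3), and the install path are placeholders I made up:

    // Update step for one Windows node. The 'prod1' label, the
    // 'app-build' job, service names foo1-foo3, and the C:\Services
    // path are all assumptions.
    node('prod1') {
        // Pull the freshly built executable from the upstream build job
        // (Copy Artifact plugin).
        copyArtifacts projectName: 'app-build', filter: 'foo.exe'
        bat '''
            rem Stop every instance before replacing the binary.
            rem (sc stop is asynchronous, so a real script would poll
            rem until the services report STOPPED.)
            sc stop foo1
            sc stop foo2
            sc stop foo3

            rem Overwrite the executable with the new build.
            copy /Y foo.exe C:\\Services\\foo.exe

            rem Bring the instances back up.
            sc start foo1
            sc start foo2
            sc start foo3
        '''
    }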
The instances on the servers listen to an Apache ActiveMQ queue, hosted on its own server, for tasks to perform, and I will submit the test jobs to that queue. What I am not sure of is how to orchestrate the update across the computers (Prod1 - Prod4) and, only once all machines have completed, submit the tests.
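For context, submitting a task to the instances is just a JMS send to the queue; a minimal Groovy producer looks something like this (the broker URL and queue name are illustrative, not our real ones):

    import javax.jms.Session
    import org.apache.activemq.ActiveMQConnectionFactory

    // Broker URL and queue name are placeholders for our real setup.
    def factory = new ActiveMQConnectionFactory('tcp://activemq-host:61616')
    def connection = factory.createConnection()
    connection.start()
    try {
        def session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
        def producer = session.createProducer(session.createQueue('foo.tasks'))
        // The service instances on Prod1 - Prod4 consume this message
        // and perform the requested task.
        producer.send(session.createTextMessage('{"task":"smoke-test-1"}'))
    } finally {
        connection.close()
    }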
While the multi-configuration job can launch the same steps on the various nodes, I need to ensure that all four computers have been updated before I move on to launching the test scripts (SoapUI) against the ActiveMQ server.
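Roughly, what I imagine a pipeline version would look like is the sketch below, where the parallel step acts as the barrier; the node labels, job name, and script/tool paths are all assumptions on my part:

    // Sketch only: node labels ('prod1'...'prod4', 'test-driver'), the
    // 'app-build' job, and the update/test script names are assumptions.
    def updateService(String label) {
        node(label) {
            // Pull the freshly built executable from the build job.
            copyArtifacts projectName: 'app-build', filter: 'foo.exe'
            // Stop instances, replace foo.exe, restart; Prod3/Prod4 may
            // be Linux, so pick the script per agent OS.
            if (isUnix()) {
                sh './update-service.sh'
            } else {
                bat 'update-service.bat'
            }
        }
    }

    // parallel() returns only when every branch has finished, so it
    // acts as the "all four machines are updated" barrier.
    parallel(
        'Prod1': { updateService('prod1') },
        'Prod2': { updateService('prod2') },
        'Prod3': { updateService('prod3') },
        'Prod4': { updateService('prod4') }
    )

    // Only now do the SoapUI tests get submitted via the ActiveMQ server.
    node('test-driver') {
        bat '"C:\\Program Files\\SmartBear\\SoapUI\\bin\\testrunner.bat" tests\\distributed-suite.xml'
    }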
Should I create a multi-configuration job for the service update and then a pipeline job to chain these processes together, or is there a better way?