I'm looking for a better infrastructure setup for managing and deploying internally developed applications which are executed periodically.
The current setup has grown into an unmonitorable, heterogeneous collection of applications that can only be executed directly on the scheduler VM.
Current situation:
- Windows environment
- A bunch of jobs written in PowerShell or as C# applications; some contain rather complex logic, some perform ETL operations
- Jobs are configured either as services or as console applications triggered by the default Windows Task Scheduler, all running on a dedicated VM
- Application-specific logging to log files (only some applications)
- Configuration via a separate app.config file for each C# console application (see the job sketch after this list)
- The Windows Task Scheduler doesn't provide a web GUI to watch and monitor job executions.
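To make the fragmentation concrete, one of the current console jobs looks roughly like this (the connection string name and appSettings key are placeholders, not the real ones):

```csharp
using System;
using System.Configuration; // reads the per-application app.config
using System.IO;

// Rough shape of one of today's console jobs:
// configuration and logging are local to each individual application.
internal static class Program
{
    private static void Main()
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings["Warehouse"].ConnectionString;
        string logFile = ConfigurationManager.AppSettings["LogFile"];

        File.AppendAllText(logFile, $"{DateTime.Now:o} job started{Environment.NewLine}");
        // ... actual ETL / business logic here ...
        File.AppendAllText(logFile, $"{DateTime.Now:o} job finished{Environment.NewLine}");
    }
}
```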
Ideal situation:
- Central monitoring: an overview of all jobs (when they ran, which ones failed)
- Trigger jobs manually via a web frontend
- Trigger job execution via an API, with the possibility to check whether the execution succeeded (the sketch after this list shows the kind of interaction I have in mind).
- Central job configuration (connection strings, configuration parameters)
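To illustrate the API requirement, this is the kind of client interaction I'd like to end up with; the host name and endpoints are purely hypothetical and would depend on whatever product or service is chosen:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical job-server API; whatever we end up using would need to
// offer something roughly equivalent to these two calls.
public static class JobApiClientSketch
{
    public static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("http://jobserver.internal/") };

        // Trigger the job and receive an execution id back.
        HttpResponseMessage response = await http.PostAsync("api/jobs/nightly-etl/run", content: null);
        response.EnsureSuccessStatusCode();
        string executionId = await response.Content.ReadAsStringAsync();

        // Later: poll the execution to check whether it succeeded.
        string status = await http.GetStringAsync($"api/jobs/nightly-etl/executions/{executionId}");
        Console.WriteLine($"Execution {executionId}: {status}");
    }
}
```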
Constraints:
- No cloud: Due to internal restrictions, the software has to reside inside our own network. Our company owns a sufficiently dimensioned server rack for hosting the required servers internally.
Considered Options
Azure WebJobs
From what I have read, this would be exactly the solution I'm looking for. However, due to our "no cloud" policy we would have to host our own Azure Pack internally, which might require considerable effort to set up and is possibly technical overkill for these requirements.
Self-written Web-API Project
Another option would be to write a dedicated Web-API project that contains all job functions, has one central configuration, exposes the job functions as Web-API methods, and uses Quartz.net for scheduling, roughly along the lines of the sketch below.
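A minimal sketch of what the Quartz.net part could look like (assuming Quartz.NET 3.x; the job name, connection string and cron schedule are placeholders):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// Placeholder job that would wrap one of the existing C# job implementations.
public class NightlyEtlJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Configuration comes from the central job data map instead of a per-app app.config.
        string connectionString = context.MergedJobDataMap.GetString("ConnectionString");
        Console.WriteLine($"{DateTimeOffset.Now:o} running ETL against {connectionString}");
        return Task.CompletedTask;
    }
}

public static class SchedulerHost
{
    public static async Task Main()
    {
        IScheduler scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<NightlyEtlJob>()
            .WithIdentity("nightly-etl")
            .UsingJobData("ConnectionString", "Server=db01;Database=Staging") // central config
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("nightly-etl-trigger")
            .WithCronSchedule("0 0 2 * * ?") // every night at 02:00
            .Build();

        await scheduler.ScheduleJob(job, trigger);

        // A Web-API controller hosted in the same process could call
        // scheduler.TriggerJob(new JobKey("nightly-etl")) for manual/API-based triggering.
        await Task.Delay(Timeout.InfiniteTimeSpan);
    }
}
```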
However, if possible I'd prefer to use standard software, so that I won't be responsible for maintaining yet another central piece of our infrastructure.
Which option would you choose? Or are there any better alternatives?