Here's the scenario.
We have a Windows Server 2012 R2 Hyper-V host running three NLB-clustered Windows Server 2012 R2 web servers with our product on them. They receive XML data from connected public servers and work fine.
We have a web service that checks for xDoS attacks (that is, an endpoint blasting out logically duplicate XML messages at the cluster) or buggy endpoints that send them in out-of-control loops. The receivers on each of the IIS servers call the web service to see whether they should process or deny a given message (see the sketch after this list for the kind of check I mean). Needless to say this must be very fast, and since this is a cluster there either has to be:
- a common place where data is stored about past received messages
- a common web service where this data is held in memory
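For concreteness, the check amounts to a shared record of recently seen messages. A minimal sketch in Python, assuming a hash of the payload is a good enough fingerprint and a fixed TTL window (the class name, TTL value, and eviction strategy are all invented; our real code is .NET on IIS):

```python
import hashlib
import threading
import time

class DuplicateChecker:
    """In-memory record of recently seen messages within a sliding TTL window."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.seen = {}          # message fingerprint -> last-seen timestamp
        self.lock = threading.Lock()

    def should_process(self, xml_payload: bytes) -> bool:
        """Return False if a logically identical message arrived within the TTL."""
        fingerprint = hashlib.sha256(xml_payload).hexdigest()
        now = time.time()
        with self.lock:
            # Rebuild-on-every-call eviction is O(n) and only fine for a sketch;
            # a production version would evict incrementally.
            self.seen = {k: t for k, t in self.seen.items() if now - t < self.ttl}
            if fingerprint in self.seen:
                self.seen[fingerprint] = now
                return False    # duplicate inside the window: deny
            self.seen[fingerprint] = now
            return True         # first sighting: process
```

The whole question below is really about where this table of fingerprints should live.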
The logical place for the first option would be a database, but that would slow the process down too much. We are getting thousands of messages per second and can't afford a database round trip for each one.
A common web service works pretty well, but when a server in the cluster goes down, the choreography among the remaining servers is pretty complex: they need to decide which one becomes the master, and when the server that went down comes back up, it needs to rejoin the others, etc. (see the sketch below).
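To give a flavor of that choreography: even the easy half, the receivers failing over to a surviving checker, looks something like the following, and it doesn't touch the genuinely hard part of keeping the in-memory state consistent across a master change. A hedged sketch in Python rather than our .NET code; the host names and endpoint path are invented:

```python
import requests  # assumed HTTP client; the real receivers are .NET

CHECKER_HOSTS = ["web1", "web2", "web3"]  # invented names, fixed priority order

def check_message(fingerprint: str) -> bool:
    """Ask whichever checker instance answers first; fail open if none do.

    Trying hosts in a fixed order sidesteps a real election protocol, but the
    hard problem -- migrating the in-memory duplicate table to the new master
    and resyncing when the old one rejoins -- isn't solved here at all.
    """
    for host in CHECKER_HOSTS:
        try:
            r = requests.get(f"http://{host}/xdos/check/{fingerprint}", timeout=0.05)
            return r.json()["process"]
        except requests.RequestException:
            continue  # host down or slow; try the next candidate
    return True  # all checkers unreachable: fail open and process
```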
Here's the issue...
Since all the VMs live on a common Hyper-V host, could we just add IIS to the host and run the xDoS-checker web service there? That web service would only ever be called from the IIS servers hosted on it (no public access). This would mean no choreography, because the only time it would go down is when all of the others were going down anyway.
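To picture it: the isolation would come from binding the checker only to the host's address on the internal virtual switch, so the guests can reach it but nothing public can. A minimal sketch (the address and port are invented, and the real deployment would be IIS on the host rather than this toy server):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

INTERNAL_IP = "192.168.10.1"  # invented: the host's address on the internal vSwitch

class CheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real handler would consult the DuplicateChecker above; hard-coded here.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"process": true}')

if __name__ == "__main__":
    # Binding to the internal address only: reachable over the virtual switch,
    # invisible to anything outside the host.
    HTTPServer((INTERNAL_IP, 8080), CheckHandler).serve_forever()
```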
Here's the question...
Bad practice? If so, why? For me, "because someone else doesn't do it that way", or some variant of that, is not a good answer to the why, by the way.
Many thanks, Rob