
We have many small applications (Windows/NT services, scheduled tasks, etc.) running on a server that runs Windows Server 2003.

Recently, the server was changed to a clustered server with two nodes, administered with Microsoft Cluster Administrator. When we installed an application that required a reboot, we discovered that the reboot caused a failover to the other node of the cluster, and the application was not available on that node.

Does this imply that we must deploy each application to each node in order to ensure high availability for each application? Does this essentially multiply the effort for applying application updates by the number of nodes in the cluster?

Also, I believe the cluster is set up so that the inactive nodes do not have access to the storage drives, so the cluster would have to be manually switched to each node before the applications could be installed/updated on that node.
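
For illustration, this is roughly the kind of thing I was hoping we could script rather than do by hand on each node. The node names, service name, and path below are made up, and I'm assuming the account running this has admin rights on both nodes and that the binaries sit at the same local path on each node:

```python
# deploy_service.py - rough sketch only; node names, service name, and binary path
# are placeholders. Assumes the binaries already exist at the same local path on
# every node (not on the shared cluster disk, which only the owning node can see).
import subprocess

NODES = ["NODE1", "NODE2"]                     # hypothetical cluster node names
SERVICE_NAME = "MyTaskService"                 # hypothetical Windows service name
BINARY_PATH = r"C:\Apps\MyTaskService\MyTaskService.exe"

def run(cmd):
    """Echo and run a shell command, stopping on the first failure."""
    print(">", cmd)
    subprocess.check_call(cmd, shell=True)

# Register the service on every node, since a failover can land the workload on
# any of them. "start= demand" so the cluster (or an admin) decides when it runs.
for node in NODES:
    run(r'sc \\{0} create {1} binPath= "{2}" start= demand'.format(
        node, SERVICE_NAME, BINARY_PATH))
```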

This all seems counter-intuitive for a high-availability production environment: simple updates would require switching nodes, potentially reducing availability, just to keep the application versions on the nodes in sync.

There seems to be a lot of documentation on setting up clusters, but I'm having a hard time finding resources on how to design simple .NET applications for deployment to a clustered server environment. Perhaps I'm not asking the right questions...
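
For example, the kind of recipe I was hoping to find would cover whether each service, once installed on both nodes, should be handed to the cluster as a "Generic Service" resource, along these lines (the group and resource names are made up, and I would want to verify the exact cluster.exe switches with `cluster res /?` on the Server 2003 nodes before relying on them):

```python
# register_with_cluster.py - sketch of registering an already-installed service with
# the cluster as a "Generic Service" resource, so Cluster Administrator starts and
# stops it on whichever node currently owns the group. Names are placeholders; the
# cluster.exe switches are from memory and should be checked against "cluster res /?".
import subprocess

GROUP = "AppGroup"             # hypothetical cluster group (holds the shared disk)
RESOURCE = "MyTaskService"     # how the resource will appear in Cluster Administrator
SERVICE_NAME = "MyTaskService" # the Windows service name installed on both nodes

commands = [
    'cluster res "{0}" /create /group:"{1}" /type:"Generic Service"'.format(RESOURCE, GROUP),
    'cluster res "{0}" /priv ServiceName={1}'.format(RESOURCE, SERVICE_NAME),
    'cluster res "{0}" /online'.format(RESOURCE),
]
for cmd in commands:
    print(">", cmd)
    subprocess.check_call(cmd, shell=True)
```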

Can anyone point me in the right direction?

Jeff
  • The right direction is modern technology that does clustering better: Server 2008 R2 or Server 2012, not a decade-old OS. – HopelessN00b May 05 '14 at 08:34
  • `Does this imply that we must deploy each application to each node in order to ensure high availability for each application? Does this essentially multiply the effort for applying application updates by the number of nodes in the cluster?` - Yes and Yes. How would the application work if there was only a single copy and the host running that copy went down? As you saw, the application went down because it's not installed and running on the failover node. – joeqwerty May 05 '14 at 14:03
  • `This all seems to be counter-intuitive for having high-availability in a production environment, in that simple updates would require switching nodes and potentially reducing availability just to keep the nodes' application versions in sync.` - It seems perfectly intuitive to me. If I want my application to be available on nodeA and nodeB then I need to install it on nodeA and nodeB, whether they are active/active or active/passive. – joeqwerty May 05 '14 at 14:04
  • @HopelessN00bGeniusofnetwork Point taken regarding modern technology as the right direction. Generally speaking, I agree, however, do you happen to know if Server 2008 R2/2012 does in fact do clustering better, or do you simply believe it is *likely* to do it better? – Jeff May 06 '14 at 18:50
  • @joeqwerty I suppose I meant to say that I was surprised that the cost of having high availability with nodes was that you would be manually installing/updating each node of the cluster. In a scenario with many nodes, this seems like it would be creating a great deal more work for whoever is maintaining updates to applications. I would have thought that you could set it up so that one install/update operation on an application would propagate to all nodes. – Jeff May 06 '14 at 18:53
  • AFAIK, in the Windows OS, clustered applications must be installed on all of the cluster nodes. - `do you happen to know if Server 2008 R2/2012 does in fact do clustering better` - Do it better in what sense? – joeqwerty May 06 '14 at 18:59
  • @Lumirris I know for a fact that Server 2008R2/Server 2012 do clustering better. Which is to say that the feature is at least usable in those OS versions. Server 2003 was (for most products) the first generation of clustering under Windows, and as with most first generation things, it didn't work very well. (Some MS products had clustering under Server 2000, and there may have been one under NT 4.0, but either way, as the technology has matured, it's become much better, and under Server 2003, it was very immature, difficult to work with and... just flaky/unreliable.) – HopelessN00b May 06 '14 at 19:02
  • @joeqwerty `... do clustering better?` I was directing that at @HopelessNOOb's comment - was wondering if he meant that generally newer operating systems are better, or if Server 2008/2012 had actually improved its clustering capabilities. I'm wondering if there's a way to mitigate installing/updating across all (N) nodes. – Jeff May 06 '14 at 19:04
  • @HopelessN00bGeniusofnetwork Thanks for the clarification. Unfortunately I will have to work with Windows Server 2003, so it sounds like it means updating each application on each node and dealing with whatever shortcomings it has. The decision to make the server clustered was made elsewhere, and it looks like you can't turn a blind eye to that change even if you don't want to leverage the cluster - if you only install an application on one node, it will be unavailable after the next server restart or failover event. – Jeff May 06 '14 at 19:08
  • `do clustering better` - in what sense? Do what in clustering better? - `improved its clustering capabilities` - which capabilities exactly? In order to provide useful and insightful answers we have to have questions that can be quantified. Are oranges today better than they were last year? Better in what way? Bigger? Juicier? More immune to spoilage? Better isn't something that can be quantified on its own. Is `what` better? `What` being the specific aspects of clustering that you want to know are better or not. – joeqwerty May 06 '14 at 19:09
  • @joeqwerty Not sure I agree with that, in this specific case. Having had to support some pretty big, HA, Server 2000 and Server 2003 clusters, I can't think of a single way that clustering hasn't been much improved in the modern OS platforms (Server 2008R2 and Server 2012). – HopelessN00b May 06 '14 at 19:12
  • I'm not trying to be difficult; I'm just saying that if you tell us what aspects of clustering you want to know are better, then we can provide answers. Can you cluster an application by installing it on one node in Windows Server 2008 R2 or Windows Server 2012? Not that I'm aware of. If that answers your `is it better?` question then I hope I've helped. – joeqwerty May 06 '14 at 19:12
  • @HopelessN00bGeniusofnetwork - I get it but it's a bit like saying that today's car engines are better than the engines of 25 years ago. Of course they are but that doesn't give me any specific information that I can use to make my case to the client/customer. `Q: Why should I upgrade my infrastructure to Windows Server 2008 R2/2012?` `A: Because it's better.` - I don't think I'd keep a lot of clients with that argument. – joeqwerty May 06 '14 at 19:15
  • @joeqwerty Yeah, that's a fair point. You do still need to install whatever feature or application you want to cluster on all your cluster nodes. – HopelessN00b May 06 '14 at 19:17
  • @joeqwerty To clarify, I'm locked into Server 2003 with Cluster Administrator, so for now, any improvements made in 2008/2012 are interesting to me personally, but that's about it. Essentially, I am looking for guidance/resources on how to properly deploy & update simple applications, running as scheduled tasks, windows/NT services and console applications, on a Server 2003 cluster - I asked [this question](http://stackoverflow.com/q/23460956/533958) on StackOverflow to inquire about any design practices when writing these applications. – Jeff May 06 '14 at 19:29

0 Answers