We have many small applications, such as Windows services, scheduled tasks, etc., running on a Windows Server 2003 machine.
Recently, the server was converted to a two-node cluster administered with Microsoft Cluster Administrator. When we installed an application that required a reboot, we discovered that the reboot triggered a failover to the other node, where the application was not available.
Does this imply that we must deploy each application to every node in order to keep it highly available? Does that essentially multiply the effort of applying application updates by the number of nodes in the cluster?
Also, I believe the cluster is set up so that inactive nodes do not have access to the shared storage drives, which means the cluster would have to be manually failed over to each node before applications could be installed or updated there.
This all seems counterintuitive for a high-availability production environment: simple updates would require switching nodes, and potentially reducing availability, just to keep the application versions on each node in sync.
There seems to be a lot of documentation on setting up clusters, but I'm having a hard time finding resources on how to design simple .NET applications for deployment to a clustered server environment. Perhaps I'm not asking the right questions...
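For example, one idea I've been toying with is to install each service on every node and have it stay idle unless it can see the shared storage. This is only a rough sketch, and it assumes the clustered disk shows up as Q:\ (a placeholder drive letter) on whichever node currently owns it:

```csharp
using System.IO;
using System.ServiceProcess;
using System.Timers;

// Sketch only: assumes the clustered storage appears as Q:\ on the node
// that currently owns the disk resource, and is not visible on the others.
public class ClusterAwareService : ServiceBase
{
    private const string SharedPath = @"Q:\AppData";   // hypothetical path on the shared disk
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(30000);                      // poll every 30 seconds
        _timer.Elapsed += OnTimerElapsed;
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    private void OnTimerElapsed(object sender, ElapsedEventArgs e)
    {
        // Only the active node can see the shared disk, so the copy of the
        // service on the passive node simply sits idle until failover.
        if (Directory.Exists(SharedPath))
        {
            DoWork();
        }
    }

    private void DoWork()
    {
        // Application-specific processing against the shared storage.
    }

    public static void Main()
    {
        ServiceBase.Run(new ClusterAwareService());
    }
}
```

But that still means installing and updating the service on every node, which brings me back to the update-effort concern above, so I'm not sure it's the right pattern.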
Can anyone point me in the right direction?