I'm trying to build a highly available topology by replaying every CLI command executed on one machine on a second machine, so that their configurations stay synchronized. That way, if the first machine breaks or something else happens to it, I can fail over to the second one, which will have the same configuration thanks to this monitoring and replaying. To keep things simple I'm only working as the root user on both machines.

I am familiar with the history command in Linux, but with that approach I would have to write my own scripts to monitor the commands and then execute them on the other machine via SSH. Is there a service or project that already does this, and if not, is there an easier way?
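For illustration, something along these lines is roughly what I mean by monitoring and executing over SSH (the hostname is just a placeholder, and I know it's naive):

    #!/usr/bin/env bash
    # Naive sketch: replay the most recent commands from root's bash history
    # on a second machine. "standby.example.com" is a placeholder hostname.
    PEER="standby.example.com"

    # Take the 20 most recent commands recorded in root's history file
    # and run each one verbatim on the peer. The -n flag keeps ssh from
    # swallowing the rest of the piped input inside the loop.
    tail -n 20 /root/.bash_history | while IFS= read -r cmd; do
        ssh -n "root@${PEER}" "$cmd"
    done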

How can I make that happen in real time?

Kostadin Krushkov

1 Answer

Apply configuration to both hosts with your automation solution of choice. The commands are well defined as playbooks or recipes, and you can test going from a blank operating system image to a working, tested application. This approach is excellent at repeating identical tasks across test and production environments.
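As a minimal sketch of that approach, here is roughly what it could look like with Ansible; the hostnames and the nginx task are placeholders, and any other configuration-management tool would serve the same purpose:

    #!/usr/bin/env bash
    # Minimal sketch: apply one playbook to both nodes so they stay identical
    # by construction. Assumes Ansible is installed and root SSH access works.
    set -euo pipefail

    cat > inventory.ini <<'EOF'
    [ha_pair]
    node1.example.com
    node2.example.com
    EOF

    cat > site.yml <<'EOF'
    ---
    - hosts: ha_pair
      become: true
      tasks:
        - name: Example task, applied identically to every node in the pair
          ansible.builtin.package:
            name: nginx
            state: present
    EOF

    ansible-playbook -i inventory.ini site.yml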

Back up data and the operating system at a frequency that meets your desired recovery point objective. Test restores, obviously.
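A minimal sketch of that, assuming rsync over SSH to a separate backup host (the hostname and paths are placeholders):

    #!/usr/bin/env bash
    # Push copies of important data to a backup host. Schedule this from cron
    # at whatever interval matches your recovery point objective, e.g.:
    #   0 * * * * /usr/local/sbin/backup.sh
    set -euo pipefail

    BACKUP_HOST="backup.example.com"   # placeholder
    STAMP="$(date +%F_%H%M)"

    rsync -aAX /etc/     "root@${BACKUP_HOST}:/backups/etc-${STAMP}/"
    rsync -aAX /var/www/ "root@${BACKUP_HOST}:/backups/www-${STAMP}/"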

Alternatively, shared storage clusters, where the important data lives on one set of LUNs failed over between hosts, are still a thing. They tend to be tricky to set up, however.
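For a rough idea of what that route involves, a Pacemaker cluster managed with pcs might define resources along these lines; the device, mount point, and address are placeholders, and the corosync/pacemaker cluster plus fencing must already be in place:

    # Mount the shared LUN on whichever node currently holds the resource
    # group, and move a service IP along with it on failover.
    pcs resource create shared_fs ocf:heartbeat:Filesystem \
        device=/dev/mapper/shared_lun directory=/srv/data fstype=xfs \
        --group storage_ha
    pcs resource create service_vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.10 cidr_netmask=24 \
        --group storage_ha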

Recording shell history and replaying it is an interesting idea, but it is tricky to capture in practice. How are you going to replay an interactive editor session? What if some important configuration is done by a less-privileged, non-root user? And if the network between the nodes is partitioned, how do you track which commands have already been run on each node?

John Mahowald