
I have

  • a primary Jenkins master, jettysburg:8888/jenkins,
  • and a failover Jenkins master, jettyperry:8888/jenkins.

I am planning to use Mercurial to keep the jobs directories of the two masters in sync.

Most of the time, new jobs and builds are defined and executed on jettysburg:8888. I would then need to sync jettyperry:8888 with whatever took place on jettysburg:8888, and I plan to perform the push once a day.

After a failover to jettyperry:8888, the push would be performed in the opposite direction.
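
A minimal sketch of that daily sync, assuming a Mercurial repository has already been initialised in each master's jobs directory. The path and host name are assumptions, and the commands are echoed as a dry run:

```shell
#!/bin/sh
# Dry-run sketch of the daily jobs sync (remove 'echo' to execute).
# The jobs path and failover host name are assumptions.
JOBS_DIR=/var/lib/jenkins/jobs
FAILOVER_HOST=jettyperry

sync_jobs() {
  # Record today's job changes, then push them to the failover master.
  echo "hg --cwd $JOBS_DIR commit --addremove --message 'daily job snapshot'"
  echo "hg --cwd $JOBS_DIR push ssh://$FAILOVER_HOST/$JOBS_DIR"
}

sync_jobs
```

After a failover, the same script would run on jettyperry with the host variable pointing back at jettysburg.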

I have been using Mercurial with my non-programming files (Word, Excel and text files) for a similar purpose: as a means of incremental, redundant backup of my "mission critical" files, as well as versioning them.

I am also hoping to rely on Mercurial to back out changes made to Jenkins jobs.
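
Backing out a bad job change could then look like this (again echoed as a dry run; the revision number is made up):

```shell
#!/bin/sh
# Dry-run sketch of undoing a bad job change with Mercurial
# (remove 'echo' to execute; the revision argument is hypothetical).
JOBS_DIR=/var/lib/jenkins/jobs

backout_job_change() {
  # 'hg backout' creates a new changeset that reverses the given revision.
  echo "hg --cwd $JOBS_DIR backout --rev $1 --message 'back out job change $1'"
  echo "hg --cwd $JOBS_DIR push"
}

backout_job_change 42
```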

Is using mercurial to sync two Jenkins masters a good idea? Is there a better way to keep two Jenkins servers in sync? In this case, I am syncing only the jobs tree.

Blessed Geek

2 Answers


As long as you don't need shared job state between the servers (they run in their own little universes) and you keep the same plugin modules and libraries on both Jenkins servers, using some form of version control to keep the job definitions in sync is fine.

My office does this with Git. We have a development set and a production set of Jenkins servers. We maintain a base Linux image with Jenkins installed, along with all necessary modules and locally installed libraries (such as Node.js). We then spin up an instance of the image and pull down the jobs.
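
That "spin up and pull down the jobs" step can be sketched like this, with local directories standing in for the job-definition remote and a fresh instance (the paths and the sample job are made up):

```shell
#!/bin/sh
# Sketch of cloning job definitions onto a fresh instance. Local
# directories stand in for the Git remote and the new server.
set -e
work=$(mktemp -d)

# A bare-bones jobs repository with one hypothetical job definition.
mkdir -p "$work/jobs-repo/jobs/sample-job"
printf '<project/>\n' > "$work/jobs-repo/jobs/sample-job/config.xml"
git -C "$work/jobs-repo" init -q
git -C "$work/jobs-repo" add jobs
git -C "$work/jobs-repo" -c user.name=ci -c user.email=ci@example.com \
    commit -qm 'job definitions'

# On a freshly booted instance, clone the job definitions into place.
git clone -q "$work/jobs-repo" "$work/instance"
ls "$work/instance/jobs"   # prints: sample-job
```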

The one thing that can be a challenge is keeping things like credentials and Jenkins config settings in sync; you might need to keep them as part of the base image.

If you need the job queues to persist and be shared (as in a master-master setup), have a look at the Gearman Plugin, which allows multiple Jenkins masters to share the same job queue: https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin

Ray
  • So it will be /var/lib/jenkins/jobs/* that goes into the repository, and the rest will be part of the image, including plugins. And when the base image is brought up, we pull the Jenkins job XMLs from the repository and we are done. Is this a correct understanding? – vaibhavnd Jul 14 '16 at 03:33
  • 1
    @vaibhavnd Yes. As long as you re-launch all running instances every time you modify the base image (to change permissions, add a plugin, etc.), you should be fine sharing the jobs via Git. If you don't re-launch, you may add a job that relies on a plugin or library that isn't on a running instance. – Ray Jul 14 '16 at 13:22

There are two approaches to this:

  1. Instead of approaching the solution as a hot backup, consider a clustered-master setup so you get an active-active solution. Have a look at https://wiki.jenkins-ci.org/display/JENKINS/Gearman+Plugin, which helps you cluster a group of masters so that one going down is not an issue.

  2. Consider running Jenkins in containers and externalize the Jenkins projects directory as an external volume on NFS, so that you can bring up another container when one goes down. Keeping both containers running at once will be a challenge if there are concurrent writes.
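
The container approach in option 2 can be sketched like this. The NFS export, mount point, and image tag are assumptions, and the commands are echoed as a dry run:

```shell
#!/bin/sh
# Dry-run sketch of bringing up a failover Jenkins container with its
# jobs directory on NFS (remove 'echo' to execute). The NFS export and
# mount point are hypothetical.
NFS_EXPORT=nfs-server:/export/jenkins/jobs
MOUNT_POINT=/mnt/jenkins-jobs

start_failover_master() {
  echo "mount -t nfs $NFS_EXPORT $MOUNT_POINT"
  # The official jenkins/jenkins image keeps its state under /var/jenkins_home.
  echo "docker run -d --name jenkins-failover -p 8888:8080 -v $MOUNT_POINT:/var/jenkins_home/jobs jenkins/jenkins:lts"
}

start_failover_master
```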

Hope this helps.

Prasanna