
I'm running a build on my build server that can take advantage of multi-core processors, but the server only has two cores, so the build takes about 4 to 5 hours. There are several machines in the lab that we developers use for these builds, but most of the time they sit idle. I'd like to find a way to speed up my build by harnessing the power of those lab servers. From a little research, one option seems to be some expensive VMware software, but it would take some convincing to get management to pay for that.

Does anyone have a general strategy for running a process like this across networked machines (they are all RHEL, by the way)? Any good places to start my research? Thanks.

3 Answers


Have you looked into distcc? Depending on what type of "building" you're doing (you didn't specify), it may be a good match.
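If it is a make-based C/C++ build, the setup is fairly small. A minimal sketch, assuming distcc is installed on every machine and `distccd` is running on the lab boxes; the host names here are placeholders for your lab machines:

```shell
# Placeholder host names -- substitute your lab machines.
# Each remote host must be running distccd and have the same
# compiler version installed.
export DISTCC_HOSTS="localhost lab1 lab2 lab3"

# Route compilation through distcc and run enough parallel jobs
# to keep the remote cores busy (a common rule of thumb is roughly
# twice the total number of available cores):
make -j8 CC="distcc gcc" CXX="distcc g++"
```

Note that only compilation is distributed; preprocessing and linking still happen on your local machine, so the speedup depends on how compile-heavy the build is.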

EEAA

Have a look at Hudson; it's essentially a generic job scheduler that makes it very easy to distribute jobs across multiple machines and aggregate the results back to a central point. You'll have to split up your build process into chunks that can be run on multiple machines, but hopefully this will be straightforward.
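As a rough sketch of what "splitting into chunks" could look like, assuming the build has independent components and each lab machine is registered as a Hudson slave (component names and paths are hypothetical):

```shell
# Hypothetical Hudson setup: one job per independent component,
# each tied to a different slave node.

# Shell step of job "build-component-a", pinned to slave lab1:
make -C component_a -j2

# Shell step of job "build-component-b", pinned to slave lab2:
make -C component_b -j2

# A downstream job on the master then collects the artifacts of the
# upstream jobs and runs the final link/packaging step.
```

The hard part is usually the split itself: the components must not depend on each other's intermediate outputs, or the jobs have to be ordered as upstream/downstream dependencies.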

gareth_bowles

Any distributed resource manager (SLURM, SGE, PBS, etc.) can handle this. There may, however, be some effort involved in setting it up correctly, and you'll need to teach your users how to submit such build jobs.
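To give a feel for what submitting a build job looks like, here is a minimal SLURM batch script sketch; the partition name, CPU count, and source path are placeholders for whatever your lab setup would use:

```shell
#!/bin/bash
# Hypothetical batch script (build.sbatch) -- partition, resources,
# and paths are placeholders, not a real configuration.
#SBATCH --job-name=nightly-build
#SBATCH --partition=lab
#SBATCH --cpus-per-task=4

cd /path/to/source
# SLURM exports the allocated CPU count, so the build can scale
# its parallelism to whatever node it lands on:
make -j"$SLURM_CPUS_PER_TASK"
```

You would submit it with `sbatch build.sbatch`, and the scheduler picks an idle lab machine that satisfies the resource request.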

The distcc approach is only suitable if you don't need to account for the load already on the remote machines.

pfo