
OK, since this got put on hold, I'm rewording it to fit the format better.

Business problem: As part of our automated install process for bare-metal machines, we need to do some basic pre-work on each system before it can be configured. This mostly consists of configuring the hardware RAID and talking to the lights-out management. We have a large mix of hardware - everything from HP DL170s to blades, to Dell R6 and R8 series, to FC630s.

Process so far: Currently, the automated process registers the system with one of our Cobbler servers and assigns it a maintenance profile. The system then PXE boots into the RHEL 6u5 boot ISO and runs some scripts via Anaconda and kickstart. It then talks to the Cobbler server and flips the profile to the real OS profile we wish to install. The goal is then to tell the system to re-PXE via IPMI and reboot, after which it installs itself with the given OS. The end installation OS can be either Linux or Windows, depending on the customer. This is all part of a larger automated process for deployments of new bare-metal environments.
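
For reference, the profile-flip and re-PXE hand-off looks roughly like this. The hostnames, profile name, and BMC credentials below are placeholders, and it assumes the maintenance environment can reach the Cobbler server over SSH (we could just as easily hit its XML-RPC API instead):

```bash
#!/bin/bash
# Sketch of the hand-off step, run from the maintenance profile's %post.
# All names and credentials here are placeholders.
SYSTEM="node01.example.com"
TARGET_PROFILE="rhel6-prod-x86_64"
COBBLER="cobbler01.example.com"
BMC="node01-bmc.example.com"

# Flip the system to the real OS profile on the Cobbler server
ssh root@"$COBBLER" "cobbler system edit --name=$SYSTEM --profile=$TARGET_PROFILE && cobbler sync"

# Force the next boot to PXE and power-cycle the box via its BMC
ipmitool -I lanplus -H "$BMC" -U admin -P changeme chassis bootdev pxe
ipmitool -I lanplus -H "$BMC" -U admin -P changeme chassis power cycle
```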

There are, however, issues with this.

  1. Putting packages into Anaconda's stage2 image isn't always easy, especially if those packages have lots of dependencies.

  2. Anaconda's %pre and %post environments don't work well with certain kernel modules.

  3. Trying to configure RAID during Anaconda's %pre is problematic, because rescanning the bus during %pre generally results in an out-of-order disk layout.

My idea was to try to use a live-CD-type distro to do these tasks, like Tiny Core or RancherOS with a utility container (much like Hanlon works). However, getting things like IPMI to work in those isn't always easy, and some of them are preconfigured for specific tasks, like Hanlon. We may need to extend this in the future to include more things (firmware updates, BIOS settings, etc.).
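
For what it's worth, the in-band IPMI part in a minimal live environment boils down to something like the following. The package names and exact module list vary by distro (e.g. ipmitool.tcz on Tiny Core), so treat this as a sketch:

```sh
#!/bin/sh
# Sketch: make in-band IPMI usable from a stripped-down live environment.
# Assumes ipmitool has already been installed from the distro's package source.
modprobe ipmi_si
modprobe ipmi_devintf

# Once /dev/ipmi0 shows up, flag the next boot as PXE and cycle the box
ipmitool chassis bootdev pxe
ipmitool chassis power cycle   # or a plain `reboot` for a soft restart
```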

Has anyone done something similar, and how did you solve it?

sjmh
  • The title of your question still screams off-topic product recommendation, but your question is now a much better fit. Are you using brand-name servers exclusively? Because there may be suitable vendor tools. Also see [this answer](http://serverfault.com/a/641415/37681) – HBruijn Jan 31 '16 at 10:25
  • @HBruijn - Sorry about that, I forgot to edit the title. We use both Dell and HP systems - the tools (such as Dell's OEM) are generally a pain to fit into the automated process, and I'd rather not have two different flows for different types of systems. – sjmh Jan 31 '16 at 20:40
  • This problem has been solved... it's no longer a mystery how to automate bare-metal deployments... However, we're missing the _details_ of your environment, like server type/manufacturer/model. – ewwhite Jan 31 '16 at 20:59
  • Honestly, your best bet is to boot an ISO of something like Windows PE over PXE. I've done this in a lot of Linux shops. Every device (BIOS, NIC, RAID, SSD/HD, etc.) has driver updates that run in Windows. Not all have Linux equivalents. – diq Jan 31 '16 at 21:09
  • @ewwhite - I've provided some details, but I'm not sure it's super helpful. In essence, because of how heterogeneous our infrastructure is, I need a generic method for doing this, which means easy to iterate over and broad in nature. If you've got any information on how this is solved, please feel free to share it. – sjmh Feb 01 '16 at 01:43

1 Answer


Use the specific tools for the platforms in your environment.

For HP, that's going to be hponcfg, hpssacli, and the HP Smart Scripting Toolkit.
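
For example, the RAID and iLO pieces of the prep step can be as simple as the following. The drive addresses, RAID level, and iLO XML file name are made-up examples; adjust them per model:

```bash
# Hypothetical values; drive layout and the iLO XML differ per server model.

# Create a RAID 1 logical drive on the Smart Array controller in slot 0
hpssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1

# Push an iLO configuration (users, network, etc.) from an XML file
hponcfg -f ilo_settings.xml
```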

You can load HP BIOS settings via an XML config.
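
With the Scripting Toolkit's conrep utility, that is roughly the following (the file name here is just an example):

```bash
# Capture the BIOS settings from a known-good reference server...
conrep -s -f golden_bios.xml
# ...then replay them onto a newly racked server of the same model
conrep -l -f golden_bios.xml
```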

I'd suggest some hardware detection... at my last environment, we even had a simple process for the datacenter folks where they chose the vendor/server type and it initiated the proper environment prep.
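
A minimal version of that detection step might look like this - the run_hp_prep/run_dell_prep helpers are hypothetical stand-ins for whatever vendor-specific scripts you end up with:

```bash
#!/bin/bash
# Pick the vendor toolchain based on what SMBIOS reports.
VENDOR=$(dmidecode -s system-manufacturer)

case "$VENDOR" in
  HP*|Hewlett*) run_hp_prep   ;;  # hpssacli / hponcfg / conrep path
  Dell*)        run_dell_prep ;;  # Dell's equivalent tooling
  *)            echo "Unhandled vendor: $VENDOR" >&2; exit 1 ;;
esac
```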

ewwhite