
I have a few lab environments where the computers get rebuilt on a periodic basis but need to keep the same SSH host keys, so that the people who connect to the lab computers (often from their own systems, not under my administration) don't get "host key changed" errors every time we upgrade the lab systems' OSes.

(This is similar to the question Smoothest workflow to handle SSH host verification errors? but in this case we've already decided that maintaining the same keys across system rebuilds is the best solution for us.)

Right now, I have a Puppet module named ssh that has the following clause in it:

file { "/etc/ssh/":
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  source  => "puppet:///modules/ssh/$fqdn",
  recurse => true,
  require => Package['openssh-server'],
  notify  => Service['sshd'],
}

On the puppet master, each host has its own directory that contains all of the host key files (ssh_host_key, ssh_host_dsa_key, ssh_host_rsa_key, etc.), as implied by the file resource definition. My problem is that this feels like mixing code and data (since all of the host keys live inside a module directory) and I have to make another commit to the Puppet VCS every time I add a new set of hosts. Is there a better way to manage these things?
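For illustration, the layout on the master looks something like the following (hostnames are examples; with module file serving, puppet:///modules/ssh/$fqdn maps to modules/ssh/files/$fqdn):

modules/ssh/files/
  host1.example.com/
    ssh_host_key
    ssh_host_key.pub
    ssh_host_dsa_key
    ssh_host_dsa_key.pub
    ssh_host_rsa_key
    ssh_host_rsa_key.pub
  host2.example.com/
    ...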

asciiphil
  • Why don't you make a script that runs every time the machine boots and rsyncs the SSH host key files back into place from somewhere that doesn't get changed? – Valter Silva May 16 '13 at 19:11
  • When you rebuild a host, does it get the same Puppet key/certificate? I haven't implemented it, but I was strongly considering re-using the same RSA key I use for the puppet client as the key for the SSH server. So then you basically just have to copy/link the puppet client RSA key into the SSH directory and call ssh-keygen to re-generate the public key (sketched below these comments). – Zoredache May 16 '13 at 19:48
  • Hm. Currently, I let the rebuilt host get a new certificate. (I have few enough new deployments that I let the clients submit self-generated keys to the master and then use `puppet cert sign` on the master to approve the keys.) I could probably get away with changing the deployment process, though it would add more steps (and require a one-time change of all of our ssh keys to derive them from Puppet). – asciiphil May 16 '13 at 20:13
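A rough sketch of the approach Zoredache describes in his comment, assuming open source Puppet's default SSL layout (the /var/lib/puppet/ssl path is an assumption and varies by version and distribution):

# Assumption: the agent's private key lives in Puppet's default ssldir.
install -m 600 "/var/lib/puppet/ssl/private_keys/$(hostname -f).pem" \
    /etc/ssh/ssh_host_rsa_key
# Derive the matching public key from the private key.
ssh-keygen -y -f /etc/ssh/ssh_host_rsa_key > /etc/ssh/ssh_host_rsa_key.pub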

2 Answers


My problem is that this feels like mixing code and data (since all of the host keys live inside a module directory) and I have to make another commit to the Puppet VCS every time I add a new set of hosts. Is there a better way to manage these things?

If you want to separate your Puppet module files from the SSH config/keys, you can adjust your fileserver.conf and share out another directory used just for the SSH config.

fileserver.conf
[sshconfig]
  path /srv/puppet/sshconfig/
  allow *

With a config like the above, your manifest could look like the one below, and files would be retrieved from /srv/puppet/sshconfig/$fqdn/.

file { "/etc/ssh/":
  source  => "puppet:///sshconfig/$fqdn",
  ...
}
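Filling in the elided attributes with the ones from the question's resource, the full manifest would presumably be:

file { "/etc/ssh/":
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  source  => "puppet:///sshconfig/$fqdn",
  recurse => true,
  require => Package['openssh-server'],
  notify  => Service['sshd'],
}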
Zoredache
  • I was unfamiliar with `fileserver.conf`; thanks! After perusing the [Puppet file serving guide](http://docs.puppetlabs.com/guides/file_serving.html), I went with a path of `/srv/puppet/sshconfig/%H` and sources of just `puppet:///sshconfig`. – asciiphil May 22 '13 at 14:27
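For reference, the variant asciiphil describes in that comment would look something like the following (per the linked file serving guide, %H in a fileserver.conf mount path expands to the connecting client's full certname/hostname):

fileserver.conf
[sshconfig]
  path /srv/puppet/sshconfig/%H
  allow *

file { "/etc/ssh/":
  source  => "puppet:///sshconfig",
  ...
}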

My suggestion is to create a module or script that uploads each machine's known hosts file to an external server, runs sort -u on the combined file to remove duplicate entries, and then pushes the resulting file back out to all of your machines, of course taking care with the new entries in ssh_known_hosts as well. Something like the sketch below.
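A minimal sketch of that idea; the /srv/collected/ and hostlist paths are hypothetical placeholders, not anything from the answer:

# Merge the known-hosts files collected from each machine and de-duplicate.
cat /srv/collected/known_hosts.* | sort -u > /srv/ssh_known_hosts
# Push the merged file back out to every machine.
while read -r host; do
    scp /srv/ssh_known_hosts "$host":/etc/ssh/ssh_known_hosts
done < hostlist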

Valter Silva