
I'm running Ubuntu on an EBS-backed EC2 instance.

In order to change the security group of my instance, I followed the instructions here for moving the EBS volumes to a new instance. Then I reassigned my Elastic IP to the new instance.

Now ssh complains that the RSA key has changed, but I don't see any mention of RSA key generation in the console log. Why does it do this? How can I get the "new" host RSA fingerprint or restore the "old" one?

Update: The procedure I detailed below is much more involved than necessary. The easiest way to manage SSH keys on an Ubuntu EC2 server is to specify them at instance launch with user data.
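For example, cloud-init (which the Canonical AMIs use) accepts an `ssh_keys` section in cloud-config user data, so you can hand the instance your existing host keys at launch and it won't generate new ones. A minimal sketch; the key material below is a placeholder:

    #cloud-config
    ssh_keys:
      rsa_private: |
        -----BEGIN RSA PRIVATE KEY-----
        (paste the existing ssh_host_rsa_key here)
        -----END RSA PRIVATE KEY-----
      rsa_public: ssh-rsa AAAAB3Nza... root@host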

Here's how I was able to get the new server RSA fingerprint (a scripted sketch of the volume shuffle follows the list):

  1. Run new EBS-backed instance, record new temporary RSA fingerprint from console log.
  2. Stop the new instance
  3. Detach EBS vol from new instance
  4. Attach old vol to /dev/sda1 on new instance
  5. Start the new instance with the old volume attached. This is when, as Michael Lowman points out, the ssh_host_rsa_key was (silently) regenerated. If I had skipped straight to step 7, I would have seen the host RSA key from the old instance.
  6. Stop the new instance
  7. Detach the old volume from /dev/sda1 and re-attach to /dev/sdb
  8. Re-attach the new instance's original EBS boot volume to /dev/sda1
  9. Start the new instance, connect via SSH (RSA fingerprint should match the temporary one noted in step 1)
  10. Copy the new ssh_host_rsa_key.pub from the old EBS volume (now mounted on /dev/sdb) into my local known_hosts file.
  11. Stop the new instance, detach the new volume from /dev/sda1 and delete it.
  12. Detach and re-attach the old volume to /dev/sda1.
  13. Bring up the new instance
  14. ssh no longer complains about the host RSA fingerprint
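For reference, the stop/detach/attach/start shuffle above can be scripted. A sketch of steps 6-9 using today's AWS CLI (the instance and volume IDs are placeholders; at the time, the equivalent ec2-api-tools commands applied):

    # Placeholders: i-new = new instance, vol-old = old boot volume, vol-new = new boot volume
    aws ec2 stop-instances --instance-ids i-new                                        # step 6
    aws ec2 detach-volume --volume-id vol-old                                          # step 7
    aws ec2 attach-volume --volume-id vol-old --instance-id i-new --device /dev/sdb    # step 7
    aws ec2 attach-volume --volume-id vol-new --instance-id i-new --device /dev/sda1   # step 8
    aws ec2 start-instances --instance-ids i-new                                       # step 9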

The question still remains: why did it change?

2 Answers


The host key is generated on the first boot of any instance. Init scripts that read the instance metadata run at every boot. One of them saves the instance ID to a particular file; if on a later boot that file is absent or contains a different ID, the system initialization is run again.

That includes generating the host key (stored at /etc/ssh/ssh_host_{rsa,dsa}_key), downloading the user public key from the metadata and storing it in the authorized_keys file, setting the hostname, and performing any other system-specific initialization.
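The key-download half of that is easy to observe for yourself: the public half of the launch key pair is served by the EC2 instance metadata service at a well-known address:

    # Fetch the launch key pair's public key from the instance metadata service
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key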

Since the determining factor is not the disk but the instance ID (which is unique to each instance), these things will always be done when you boot an EBS volume attached to a new instance.
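You can see the mismatch that triggers re-initialization by comparing the live instance ID against the one cached on the volume (the cache path below is an assumption; it varies across cloud-init versions):

    # The instance ID the metadata service reports for this boot
    curl -s http://169.254.169.254/latest/meta-data/instance-id
    # The ID recorded on a previous boot (path is an assumption; recent
    # cloud-init keeps per-instance state under /var/lib/cloud)
    cat /var/lib/cloud/data/instance-id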

Edit:

I looked deeper into Ubuntu specifically and launched an Ubuntu AMI (3ffb3f56). I'm not a big Ubuntu guy (I usually prefer Debian), so this was getting a little deeper into the Ubuntu upstart-based init sequence than I usually go. It seems what you're looking for is /etc/init/cloud*.conf. These run /usr/bin/cloud-init and friends, which have lines like

    cloud.sem_and_run("set_defaults", "once-per-instance",
                      set_defaults, [cloud], False)

All the code's in Python, so it's pretty readable. The base is provided by the package cloud-init and the backend for the scripts is provided by cloud-tools. You could look at how it determines "once-per-instance" and trick it that way, or work around your problem with some other solution. Best of luck!
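The gist of that "once-per-instance" gating is a semaphore file keyed on the instance ID. A minimal Python sketch of the idea, not cloud-init's actual code; the paths and names here are invented for illustration:

    import os

    SEM_DIR = "/var/lib/cloud/sem"  # hypothetical location for this sketch

    def sem_and_run(name, frequency, func, args):
        # Read the instance ID recorded for this boot (hypothetical path).
        with open("/var/lib/cloud/data/instance-id") as f:
            instance_id = f.read().strip()
        sem = os.path.join(SEM_DIR, "%s.%s" % (name, instance_id))
        if frequency == "once-per-instance" and os.path.exists(sem):
            return False  # already ran on this instance ID; skip
        func(*args)
        # Touch the semaphore so later boots on the same instance ID skip it.
        open(sem, "w").close()
        return True

A new instance ID means no matching semaphore exists, so everything marked "once-per-instance", host key generation included, runs again.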

Michael Lowman
  • Okay, I see. Any idea why the new key wasn't printed on the console log? – Aryeh Leib Taurog Jul 26 '11 at 21:27
  • Also, if I understand correctly, the determining factor is really the **match** between the metadata and the block device. Therefore it should be possible to update the EBS volume with the ID of the new instance before detaching it from the old instance, so that it does not run system initialization when I bring it up on the new instance. Is this a bad idea? Where does this file live (in the canonical Ubuntu AMIs)? – Aryeh Leib Taurog Jul 26 '11 at 21:34
  • I don't specifically know about the Canonical AMIs; I'm going based on the capabilities available in EC2. As for the update, you're correct, but the instance ID isn't generated until the AMI is started, so you won't be able to update your boot volume. It'd probably be best not to circumvent this update procedure, but I can't think of a reason why it would hurt anything (off the top of my head). And I'm not sure why it isn't in the console. – Michael Lowman Jul 26 '11 at 21:41
  • I'm assuming I brought up the new instance first, then stopped the original instance. – Aryeh Leib Taurog Jul 26 '11 at 22:02
  • @Aryeh I updated my answer with some other information I dug up. You might be able to do as you suggest, although it sounds a little convoluted: could you deploy a new EBS instance from the base instead? If you have user-specific data, keep that on EBS. Anyways, just a suggestion. – Michael Lowman Jul 26 '11 at 22:13
  • Thanks much for your research efforts! I agree my approach is somewhat convoluted, but can you explain what you mean by deploy from the base? All I wanted to do was change the security group of the original instance, but since I apparently can't do that, I tried to replace it with another instance with the correct SG. Ideally, it would be identical in all other respects. Since I'm terminating the original instance, it would just be a whole lot easier if the RSA fingerprint could remain the same as well. – Aryeh Leib Taurog Jul 27 '11 at 17:14
  • I meant deploy a new Ubuntu instance from the AMI and copy over your data. I assumed you wanted this to be automated and repeatable; if not, you can just copy `/etc/ssh/ssh_host_*_key` to a temp location and overwrite the newly-generated ones after you get the new instance up and running. – Michael Lowman Jul 27 '11 at 19:13
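That last suggestion is straightforward to script. A sketch, assuming the old volume is mounted at /mnt/oldvol (the mount point and device name are placeholders):

    # Copy the original host keys from the old volume over the regenerated ones
    sudo cp /mnt/oldvol/etc/ssh/ssh_host_*_key* /etc/ssh/
    # Restart sshd (the Ubuntu service is named 'ssh') to pick up the old keys
    sudo service ssh restart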

(As far as I know,) EC2 images are initially accessible via the key pair that you associate with them, regardless of the keys set up on the machine. Consider the scenario where you launch a public AMI: you don't have the private/public keys to access it, so you generate a key pair, associate it, and use the private key from the key pair. Moreover, if you have an instance you have lost access to, reloading it on another instance will typically let you access it by setting a new key pair.

It would stand to reason, therefore, that at least one login key (root's) is set from the key pair at the time the image is launched.
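In practice that first login uses the private half of the launch key pair, along these lines (the key file name and address are placeholders; Canonical's AMIs use the 'ubuntu' user):

    # Log in with the private key of the key pair chosen at launch
    ssh -i ~/.ssh/my-keypair.pem ubuntu@203.0.113.10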

(A side note: 'fingerprint' usually refers to the server's host key signature. It varies per 'virtual' machine, regardless of other factors, and exists to give you some assurance that you are connecting to the server you believe you are connecting to.)
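Two standard OpenSSH commands are handy for checking this: one prints the fingerprint of the host key the server actually presents, the other shows what your client has already recorded (the hostname is a placeholder):

    # On the server: fingerprint of the host key sshd presents
    ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
    # On your client: the entry recorded in known_hosts for that host
    ssh-keygen -F ec2-203-0-113-10.compute-1.amazonaws.com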

cyberx86
  • Perhaps it wasn't clear; my question is in fact about the server signature. My understanding is that when the instance first starts up, it generates the host RSA identity key for sshd, stores it in /etc/ssh/, and also writes it to the console. Since it is stored on the EBS boot volume, I don't understand why it changed when I used the same volume to boot a different instance. – Aryeh Leib Taurog Jul 26 '11 at 04:06
  • I think that same premise still applies - AWS sets the key (and therefore the fingerprint) when an instance first boots, even if it is the same key - if you have your old fingerprint, get the fingerprint of your public key (`ssh-keygen -lf /path/to/key`) and see if they match. I find that even starting from the same image, with the same key - I do get different fingerprints - I think that the process of importing the key actually alters it in some ways. I must say I am rather curious as to actual reason now. – cyberx86 Jul 26 '11 at 05:04
  • For interest sake, I did a quick experiment - I checked the fingerprint of /etc/ssh/ssh_host_rsa_key.pub - it matches what PuTTY displays when I first login. I then created a new instance - I was able to login with the same private key, but on checking that file, I got a different fingerprint - the files are not the same. Interestingly enough, the image I run (Amazon Linux) provides private keys in the same folder - they don't match between instances either - and none of them match the fingerprint from the keypair I use. – cyberx86 Jul 26 '11 at 05:26
  • Ahh - I think I've got it - the actual key your ssh key is authorized against is in /home/USERNAME/.ssh/authorized_keys - it even explicitly lists the name of your keypair. As for the key changing, I believe that is the work of 'cloud-init'. So my hypothesis: you created a new instance, stopped it, attached your EBS root, and started it - when starting, cloud-init ran and changed the RSA key - giving a new fingerprint; however, the authorized_keys file contained the public key from your keypair, so you could login. – cyberx86 Jul 26 '11 at 05:42
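The authorized_keys half of that hypothesis is easy to verify (assuming the default 'ubuntu' user on a Canonical AMI):

    # The last field of each line typically names the EC2 key pair
    # the public key was imported from
    cat /home/ubuntu/.ssh/authorized_keys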