I'm running Ubuntu on an EBS-backed EC2 instance.
In order to change the security group of my instance, I followed the instructions here for moving the EBS volumes to a new instance. Then I reassigned my Elastic IP to the new instance.
Now ssh complains that the RSA key has changed, but I don't see any mention of RSA key generation in the console log. Why does it do this? How can I get the "new" host RSA fingerprint or restore the "old" one?
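For context, this is roughly how I've been comparing the fingerprints and clearing the stale entry; the instance ID and hostname below are placeholders, and I'm showing the modern `aws` CLI (the older `ec2-get-console-output` from the API tools prints the same console log):

```
# Fingerprint block the instance printed at boot; look for
# "-----BEGIN SSH HOST KEY FINGERPRINTS-----" in the console log
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text \
  | grep -A 5 "SSH HOST KEY FINGERPRINTS"

# What my local ssh client has cached for this host
ssh-keygen -F ec2-host.example.com

# Once the new key is trusted, drop the stale entry so ssh stops warning
ssh-keygen -R ec2-host.example.com
```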
Update: The procedure I detailed below is much more involved than necessary. The easiest way to manage SSH host keys on an Ubuntu EC2 server is to specify them at instance launch with user data.
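As a rough sketch of that user-data approach (not the exact script I used), a boot-time script can re-install saved host keys so the fingerprint never changes. The key bodies below are placeholders, and cloud-init can also take the keys directly in a `#cloud-config` block via `ssh_keys:`:

```
#!/bin/bash
# Passed as user data at launch; cloud-init on Ubuntu runs this at first boot.
# Re-install the saved host keys so the fingerprint stays the same.
# The key bodies below are placeholders -- paste the real saved keys.
cat > /etc/ssh/ssh_host_rsa_key <<'EOF'
-----BEGIN RSA PRIVATE KEY-----
(saved private host key goes here)
-----END RSA PRIVATE KEY-----
EOF
cat > /etc/ssh/ssh_host_rsa_key.pub <<'EOF'
ssh-rsa AAAA...saved-public-host-key... root@myhost
EOF
chmod 600 /etc/ssh/ssh_host_rsa_key
chmod 644 /etc/ssh/ssh_host_rsa_key.pub
service ssh restart
```

Keep in mind that anything in user data is readable from the instance metadata service, so this trades some secrecy of the private host key for a stable fingerprint.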
Here's how I was able to get the new server RSA fingerprint (a CLI sketch of the stop/detach/attach cycle follows the list):

1. Run a new EBS-backed instance and record the new temporary RSA fingerprint from the console log.
2. Stop the new instance.
3. Detach the EBS volume from the new instance.
4. Attach the old volume to `/dev/sda1` on the new instance.
5. Start the new instance with the old volume attached. This is when, as Michael Lowman points out, the `ssh_host_rsa_key` was (silently) regenerated. If I had skipped straight to step 7, I should have seen the `ssh_host_rsa_key` from the old instance.
6. Stop the new instance.
7. Detach the old volume from `/dev/sda1` and re-attach it to `/dev/sdb`.
8. Re-attach the new instance's original EBS boot volume to `/dev/sda1`.
9. Start the new instance and connect via SSH (the RSA fingerprint should match the temporary one noted in step 1).
10. Copy the new `ssh_host_rsa_key.pub` from the old EBS volume (now mounted on `/dev/sdb`) into my local `known_hosts` file (see the known_hosts sketch after this list).
11. Stop the new instance, detach the new volume from `/dev/sda1`, and delete it.
12. Detach and re-attach the old volume to `/dev/sda1`.
13. Bring up the new instance.
14. ssh no longer complains about the host RSA fingerprint.
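The stop/detach/attach cycle above, expressed with the `aws` CLI for concreteness (I actually clicked through the AWS console; all IDs below are placeholders):

```
OLD_VOL=vol-aaaa1111      # root volume of the original instance (placeholder)
NEW_VOL=vol-bbbb2222      # root volume of the freshly launched instance (placeholder)
INSTANCE=i-cccc3333       # the new instance (placeholder)

# Steps 2-5: boot the new instance from the old root volume
aws ec2 stop-instances --instance-ids "$INSTANCE"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE"
aws ec2 detach-volume --volume-id "$NEW_VOL"
aws ec2 attach-volume --volume-id "$OLD_VOL" --instance-id "$INSTANCE" --device /dev/sda1
aws ec2 start-instances --instance-ids "$INSTANCE"

# Steps 6-9: put the new root volume back and park the old one on /dev/sdb
aws ec2 stop-instances --instance-ids "$INSTANCE"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE"
aws ec2 detach-volume --volume-id "$OLD_VOL"
aws ec2 attach-volume --volume-id "$NEW_VOL" --instance-id "$INSTANCE" --device /dev/sda1
aws ec2 attach-volume --volume-id "$OLD_VOL" --instance-id "$INSTANCE" --device /dev/sdb
aws ec2 start-instances --instance-ids "$INSTANCE"
```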
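And step 10 in a bit more detail: on the new instance, mount the old volume and turn its public host key into a `known_hosts` line for the local machine. The mount point and hostname are placeholders; on these Ubuntu EBS AMIs the root volume is a bare filesystem, so it mounts without a partition suffix:

```
# On the new instance: mount the old root volume (attached at /dev/sdb in step 7)
sudo mkdir -p /mnt/oldroot
sudo mount /dev/sdb /mnt/oldroot    # use /dev/sdb1 instead if the volume is partitioned

# Fingerprint of the key that was regenerated onto the old volume
ssh-keygen -lf /mnt/oldroot/etc/ssh/ssh_host_rsa_key.pub

# Emit a known_hosts line ("<host> <type> <base64-key>"); paste the output into
# ~/.ssh/known_hosts on the local machine (hostname is a placeholder)
awk '{print "ec2-host.example.com", $1, $2}' /mnt/oldroot/etc/ssh/ssh_host_rsa_key.pub
```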
The question still remains: why did it change?