I've had a Linux t1.micro running a small Apache/PHP/PostgreSQL website for a couple of years now (referred to herein as the "original instance"). It works like a charm.

I wanted to migrate the site to the new (cheaper) t2.micro instance type. I couldn't find any step-by-step instructions for how to do this, but took note of this and this.

My basic approach was

  1. Create snapshots of the two volumes (root and a data drive for the PostgreSQL data) used by the original instance
  2. Create a new HVM AMI from the root and data snapshots just taken (ELASTIC BLOCK STORE > Snapshots > select original instance root snapshot > Actions > Create Image), with
    • Architecture = x86_64, same as the original instance
    • Root device and data volume device names = same as the original instance
    • Virtualization type = Hardware-assisted virtualization (HVM), required by the t2 instance family
    • Kernel ID = "Use default" (I tried to use the same Kernel ID as the original instance, but the AMI creation failed, saying it couldn't use that for an HVM AMI)
  3. Create and launch a new instance with the AMI just created at step 2
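Step 2 boils down to the parameters you would pass to the EC2 `RegisterImage` API (what the console's "Create Image" action calls behind the scenes). A minimal sketch, assuming placeholder snapshot IDs, device names, and AMI name:

```python
# Sketch of the AMI-creation step as RegisterImage parameters.
# All IDs and names below are placeholders, not values from the question.
register_params = {
    "Name": "migrated-hvm-ami",          # hypothetical AMI name
    "Architecture": "x86_64",            # same as the original t1 instance
    "VirtualizationType": "hvm",         # required for the t2 family
    "RootDeviceName": "/dev/sda1",       # must match the original root device
    "BlockDeviceMappings": [
        {"DeviceName": "/dev/sda1",      # root volume from its snapshot
         "Ebs": {"SnapshotId": "snap-root-0123", "DeleteOnTermination": True}},
        {"DeviceName": "/dev/sdf",       # data volume from its snapshot
         "Ebs": {"SnapshotId": "snap-data-0456", "DeleteOnTermination": False}},
    ],
    # Note: no KernelId. HVM AMIs boot their own kernel, which is why
    # "Use default" works and a PV kernel ID is rejected.
}
# With credentials configured, this would be submitted roughly as:
# import boto3
# boto3.client("ec2").register_image(**register_params)
```

The key point is that the root `DeviceName` in the mapping must match `RootDeviceName`, and that `VirtualizationType` alone does not convert the contents of the snapshot; it only changes how EC2 tries to boot it.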

Problem: upon starting this new instance, it shuts down immediately with a Client.InstanceInitiatedShutdown error. How can I troubleshoot this?

Am I naive to think I can use a root snapshot that works in a PV (paravirtual) environment in an HVM environment? Is there an easier way to migrate from t1 to t2?

I'm hoping not to have to re-build my server in t2 from scratch and migrate data manually (I didn't use any automated build scripts).

Edit: I ended up rebuilding the t2 instance from scratch :P

poshest
  • I had a very similar problem recently, but with volume types instead of instance families. I was attempting to create a new instance with the new "gp2" SSD volume type from the AWS PowerShell utilities. The instance would be created, then terminate immediately with an InstanceInitiatedShutdown. I was eventually able to solve the issue by explicitly defining a "gp2" volume and setting its device name to `'/dev/sda1'` in a block device mapping. You may likewise have to investigate and modify your root volume. – Anthony Neace Jul 24 '14 at 14:48
  • Thanks @HyperAnthony. I'm not sure I have the skills for that and may do more damage than good. :P I found this though: ["There is no easy way of changing the virtualization type from PV to HVM... Your best solution would be to spin up a new instance and migrate your data."](https://forums.aws.amazon.com/message.jspa?messageID=558479) – poshest Jul 25 '14 at 21:42
  • 1
    See also https://stackoverflow.com/questions/26676933/migrate-from-t1-micro-to-t2-micro-amazon-aws – lid Apr 26 '16 at 07:14

1 Answer


I was going from HVM to PV and noticed the same issue. It turned out that I (my automation) was still attaching block storage to /dev/sda1, which is what my HVM AMI needed, but I needed to attach it to /dev/xvda instead.
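The fix above amounts to changing the device name in the block device mapping to the one the target virtualization type expects. A minimal sketch, with the snapshot ID as a placeholder:

```python
# Before (what the automation was doing): root mapped to /dev/sda1.
# After (the fix from this answer): root mapped to /dev/xvda.
broken_mapping = {
    "DeviceName": "/dev/sda1",           # device name the HVM AMI expected
    "Ebs": {"SnapshotId": "snap-root-0123", "DeleteOnTermination": True},
}

fixed_mapping = dict(broken_mapping)
fixed_mapping["DeviceName"] = "/dev/xvda"  # device name the target AMI expects

# This mapping would then be passed to RunInstances/RegisterImage, e.g.:
# boto3.client("ec2").run_instances(..., BlockDeviceMappings=[fixed_mapping])
```

You can confirm which device name a given AMI expects by checking its `RootDeviceName` (visible in the console, or via `aws ec2 describe-images`), as the second comment below describes.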

George IV
  • For me the same thing happens on a launch configuration (LC). For some reason AWS doesn't match the mount against the AMI when it validates the LC. – eco Apr 12 '19 at 18:15
  • I had the same issue on an HVM instance (latest HVM Amazon Linux, kernel 5.10) ami-0f9fc25dd2506cf6d. I had to switch from `sda1` to `xvda`, which was easy enough to verify by manually launching and checking the volume device in the console. – Akom May 02 '22 at 15:16