
I had an Ubuntu Lucid 10.04 instance running on an m1.small in the AWS us-east region. A while back it had some issues (I think it was an AWS problem); in any case, we started up a new instance, attached our separate data volumes to it, and everything was back to normal.

There were some files on the boot volume of the old instance that I wanted to access, so I stopped the old instance, detached its boot volume, and attached it to the new instance as /dev/sdg, where I mounted it. Unfortunately I didn't tidy up afterwards and the volume remained attached and mounted.

Yesterday I had to reboot the new instance (AWS said that some scheduled maintenance required this and that I could do it manually before they did).

Once it was rebooted I noticed some problems with it; for example, users added recently are not shown in /etc/passwd, only older users are there.

From what I can tell it seems that the instance has rebooted from the old boot drive.

So how do I…

1) Determine which volume I'm actually booted from to confirm my suspicions?

2) Tell the instance which volume to boot from (if I'm right)?

Regards

Paul


1 Answer

  1. You can determine the root device using:

    ec2-describe-instance-attribute INSTANCE_ID --root-device-name

    Sample output:

    rootDeviceName  i-xxxxxx      /dev/sda1

    You can then determine which 'block device' this maps to using:

    ec2-describe-instance-attribute INSTANCE_ID --block-device-mapping

    Sample output:

    BLOCKDEVICE     /dev/sda1       vol-xxxxxxxa    2011-11-13T21:09:53.000Z
    BLOCKDEVICE     /dev/sdf        vol-xxxxxxxb    2011-11-13T21:09:53.000Z
    BLOCKDEVICE     /dev/sdg        vol-xxxxxxxc    2011-11-13T21:09:53.000Z

    (Of course, you could just use df or mount on the instance to determine the root device and then look at the block device mappings; a quick sketch of that check follows below.)

  2. To change the root device, you have two approaches:

    a. Stop the instance, detach the incorrect root volume, attach the correct root volume as the same device (e.g. /dev/sda1), and restart the instance; a command sketch for this follows below. The change should persist through restarts, but not through terminations, as you haven't modified the image the instance is based on.

    b. Register a new image that uses the correct root snapshot and launch a new instance from that image (a launch sketch follows below). To register it, run:

    ec2-register -s snap-xxxxxxxa --name "AMI_NAME" --root-device-name /dev/sda1 --block-device-mapping "/dev/sda1=snap-xxxxxxxa"

    As far as I know, there is no way to change the 'root-device-name' of an instance once it has been launched.
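
For reference, the check from inside the instance itself is just as quick; this is a minimal sketch, and the grep pattern simply matches the root mount line:

    # Show which block device holds the root filesystem
    df -h /
    # Or pull the root entry out of the mount table
    mount | grep ' on / '

Comparing the device reported there against the block device mapping tells you which EBS volume the instance actually booted from.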
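
If you take approach 2a, the sequence with the same ec2-api-tools looks roughly like this (a sketch only; the instance and volume IDs are placeholders, not values from your setup):

    # Stop the instance so the root volume can be swapped
    ec2-stop-instances i-xxxxxxxx
    # Detach the wrong root volume, then attach the correct one as /dev/sda1
    ec2-detach-volume vol-xxxxxxxa
    ec2-attach-volume vol-xxxxxxxd -i i-xxxxxxxx -d /dev/sda1
    # Start the instance again; it should now boot from the re-attached volume
    ec2-start-instances i-xxxxxxxx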
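
Once the new AMI is registered (approach 2b), launching from it is a normal ec2-run-instances call; the AMI ID, instance type, and key pair name below are placeholders:

    ec2-run-instances ami-xxxxxxxx -t m1.small -k my-keypair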

cyberx86
  • Stupidly I tried 2a before your answer arrived. The instance would not respond when I tried to start up off the correct volume on the correct device, checked using your advice in (1) above. The management console said it was running but SSH connections were refused. I decided to launch a new server. If I find any more info on the old instance I'll post here. Thanks for the help. I'll know next time – Paul Willis Dec 08 '11 at 18:35