
I am trying to use Terraform to create an r3.large instance in AWS.

Here's a snippet of my instance definition in Terraform:

resource "aws_instance" "centos-server" {

   ephemeral_block_device {
     device_name  = "/dev/xvdf"
     virtual_name = "ephemeral0"
   }

   user_data = "${file("./user-data.yml")}"
}

and here is my user-data.yml file:

#cloud-config
device_aliases:
  'ephemeral0': '/dev/xvdf'
disk_setup:
  ephemeral0:
    table_type: 'mbr'
    layout: true
    overwrite: true
fs_setup:
 - label: ephemeral0
   filesystem: ext4
   device: ephemeral0
   partition: auto
mounts:
 - [ ephemeral0, "/media/ephemeral0", "ext4", "noatime", "0", "2" ]

When I SSH into the running instance, I can see the instance store with "fdisk -l", but it's not partitioned or formatted.

Edit: Added a snippet of the cloud-init log

Cloud-init v. 0.7.5 running 'modules:config' at Wed, 07 Feb 2018 19:09:33 +0000. Up 41.76 seconds.
2018-02-07 19:09:33,600 - util.py[WARNING]: Activating mounts via 'mount -a' failed
grbonk
  • Does the cloud-init log have anything useful to say? – ydaetskcoR Feb 06 '18 at 08:28
  • Not really. Just that "mount -a" fails. I can see that /etc/fstab is modified by cloud-init, and the mount fails because the drive was not formatted first. – grbonk Feb 07 '18 at 18:37

1 Answer


The post linked below mentions that cloud-init on Amazon Linux does not support the fs_setup module. I think that's why you're seeing this failure.

https://stackoverflow.com/a/53194483/8431665

I think you could try using bootcmd, mounts, and runcmd to format and mount the device directly instead.
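
For example, here is a minimal, untested sketch of that approach in user-data.yml (it assumes the instance store really appears as /dev/xvdf, as in your question, and that you want ext4 on the whole device without a partition table):

#cloud-config
bootcmd:
 # format only if the device has no filesystem yet; blkid prints nothing in that case
 - test -n "$(blkid -o value -s TYPE /dev/xvdf)" || mkfs.ext4 -L ephemeral0 /dev/xvdf
mounts:
 - [ /dev/xvdf, "/media/ephemeral0", "ext4", "defaults,noatime", "0", "2" ]

bootcmd runs on every boot before the mounts module, so the blkid guard keeps it from reformatting a device that already has a filesystem.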

Carl Tsai