I have created an EBS volume that I can attach to EC2 instances using Terraform, but I cannot work out how to attach it to an EC2 instance created by an auto-scaling group.
Code that works:
resource "aws_volume_attachment" "ebs_name" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.name.id
  instance_id = aws_instance.server.id
}
Code that doesn't work:
resource "aws_volume_attachment" "ebs_name" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.name.id
  instance_id = aws_launch_template.asg-nginx.id
}
What I am hoping for is an auto-scaling launch template that attaches an EBS volume that already exists, allowing for a high-performance EBS share instead of a "we told you not to put code on there" EFS share.
Edit: I am using a multi-attach EBS. I can attach it manually to multiple ASG-created EC2 instances and it works. I just can't do it using Terraform.
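For context, EBS Multi-Attach is only supported on Provisioned IOPS (io1/io2) volumes, and the volume must be in the same Availability Zone as every instance that attaches it. A sketch of such a volume in Terraform (the AZ, size, and IOPS values here are assumptions):

```hcl
# Sketch: Multi-Attach requires a Provisioned IOPS (io1/io2) volume
# in the same AZ as every instance that attaches it.
resource "aws_ebs_volume" "name" {
  availability_zone    = "us-east-1a" # assumption: the ASG's AZ
  size                 = 100          # GiB; assumption
  type                 = "io2"
  iops                 = 3000
  multi_attach_enabled = true
}
```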
Edit 2: I finally settled on a user_data entry in Terraform that runs an AWS CLI bash script to attach the multi-attach EBS volume.
Script:
#!/bin/bash
[…aws keys here…]
aws ec2 attach-volume --device /dev/sdxx --instance-id "$(cat /var/lib/cloud/data/instance-id)" --volume-id vol-01234567890abc
reboot
Terraform:
data "template_file" "shell-script" {
  template = file("path/to/script.sh")
}

data "template_cloudinit_config" "script_sh" {
  gzip          = false
  base64_encode = true

  part {
    content_type = "text/x-shellscript"
    content      = data.template_file.shell-script.rendered
  }
}
resource "aws_launch_template" "template_name" {
  […]
  user_data = data.template_cloudinit_config.script_sh.rendered
  […]
}
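As an aside, template_file and template_cloudinit_config come from the now-deprecated hashicorp/template provider; on a recent Terraform version the same result can likely be had with built-in functions, roughly:

```hcl
# Sketch, assuming a recent Terraform version: launch template
# user_data must be base64-encoded, which base64encode() handles.
resource "aws_launch_template" "template_name" {
  […]
  user_data = base64encode(file("path/to/script.sh"))
  […]
}
```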
The risk here is storing a user's AWS keys in a script, but as the script is never stored on the servers, it's not a big deal: anyone with access to the user_data already has access to better keys than the ones used here.
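One way to avoid embedding keys entirely would be an IAM instance profile on the launch template, so the instance itself is allowed to call ec2:AttachVolume. A rough sketch (the role and profile names are hypothetical):

```hcl
# Sketch: an IAM instance profile (hypothetical names) lets the
# instance call ec2:AttachVolume without stored credentials.
resource "aws_iam_instance_profile" "attach_volume" {
  name = "attach-volume"
  role = aws_iam_role.attach_volume.name # role granting ec2:AttachVolume
}
```

The launch template would then reference it with an `iam_instance_profile { name = aws_iam_instance_profile.attach_volume.name }` block, and the keys could be dropped from the script.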