
I have several Ansible playbooks which provision EC2 instances for various services like Consul, Kong, and some ECS tasks. I can currently filter down to the instances that need additional storage using ec2_remote_facts: and then attach volumes with ec2_vol:.

- name: Gather EC2 Facts
  ec2_remote_facts:
    region: "{{ region }}"
    filters:
      "things"
  register: info

- name: ec2-ebs
  ec2_vol:
    instance: "{{ item.id }}"
    volume_size: 5
    volume_type: gp2
    device_name: xvdd
    region: "{{ region }}"
    name: "ebs-{{ environmentName }}-{{ item.id }}"
  register: volumes
  with_items: "{{ info.instances }}"

- name: ecs-tag
  ec2_tag:
    resource: "{{ item.volume_id }}"
    region: "{{ region }}"
    state: present
    tags:
      Environment: "{{ environmentName }}"
  with_items: "{{ volumes.results }}"

My solution moving forward is to just use shell: to SSH into each machine individually and run a script amounting to vgcreate blah xvdd; lvcreate blah-volume blah; format on each machine. However, without a lot of work I don't know how to make that idempotent, and if we scale, a custom shell script gives us less flexibility.

I've found the Ansible lvol: module, but it requires running on the new EC2 machine. We are executing with hosts: localhost, which I think means we are just using the AWS CLI through Ansible from my machine, so I can't actually reach the EC2 instance.

alexddupree

1 Answer


You can add the generated hosts to the inventory dynamically using add_host.
Then run a play against the new inventory host/group to do the required configuration.
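A minimal sketch of that approach, reusing the info variable registered in the question; the group name storage_nodes is hypothetical, and the field names come from the old ec2_remote_facts output (verify against your version):

```yaml
- hosts: localhost
  connection: local
  tasks:
    # ... ec2_remote_facts / ec2_vol tasks from the question, registering "info" ...

    - name: Add the discovered instances to an in-memory group
      add_host:
        name: "{{ item.private_ip_address }}"   # or public_ip_address, depending on reachability
        groups: storage_nodes                   # hypothetical group name
      with_items: "{{ info.instances }}"

# A second play in the same playbook run then targets that group over SSH.
- hosts: storage_nodes
  become: yes
  tasks:
    - name: Prove we can reach the new instances
      ping:
```

add_host only affects the in-memory inventory for the current playbook run, which is exactly what you want for freshly provisioned instances.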

Note: there's a module to format a filesystem (filesystem) as well as one to mount disks (mount).
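Together with lvg/lvol, those modules keep the whole chain idempotent, so re-running the play leaves existing VGs, LVs, and mounts untouched. A sketch, assuming the dynamically added group from above and hypothetical names (VG data, LV data01, mount point /data):

```yaml
- hosts: storage_nodes   # hypothetical group added via add_host
  become: yes
  tasks:
    - name: Create a volume group on the new EBS device
      lvg:
        vg: data
        pvs: /dev/xvdd

    - name: Create a logical volume using all free space
      lvol:
        vg: data
        lv: data01
        size: 100%FREE

    - name: Format the logical volume (no-op if already formatted)
      filesystem:
        fstype: ext4
        dev: /dev/data/data01

    - name: Mount it and persist the entry in /etc/fstab
      mount:
        name: /data
        src: /dev/data/data01
        fstype: ext4
        state: mounted
```

This replaces the vgcreate/lvcreate/format shell script from the question with modules that are safe to run repeatedly across a scaled-out fleet.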

cohenjo