
I am trying to copy files to a Windows EC2 instance through cloud-init by passing them in via user data. The cloud-init template runs and creates the folder, but it does not copy the files. Can you help me understand what I am doing wrong in my code?

This code is passed in through the launch configuration of an Auto Scaling group:

data "template_cloudinit_config" "ebs_snapshot_scripts" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    content      = <<EOF
<powershell>
$path = "C:\aws"
If(!(test-path $path))
{
      New-Item -ItemType Directory -Force -Path $path
}
</powershell>

write_files:
   -  content: |
     ${file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/1-start-ebs-snapshot.ps1")}
       path: C:\aws\1-start-ebs-snapshot.ps1
       permissions: '0744'
   -   content: |
     ${file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/2-run-backup.cmd")}
       path: C:\aws\2-run-backup.cmd
       permissions: '0744'
   -   content: |
     ${file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/3-ebs-snapshot.ps1")}
       path: C:\aws\3-ebs-snapshot.ps1
       permissions: '0744'
EOF
  }
}
mellifluous

2 Answers


Your current approach uses the Terraform template language to produce YAML by concatenating strings, some of which are multi-line strings read from external files. That will always be hard to get right, because YAML is a whitespace-sensitive language.

I have two ideas to make this easier. You could potentially do both of them, although doing one or the other could work too.


The first idea is to follow the recommendations about generating JSON and YAML from Terraform's templatefile function documentation. Although your template is inline rather than in a separate file, you can apply the same principle here: let Terraform itself be responsible for producing valid YAML, so that you only need to worry about giving the input data structure the correct shape:

  part {
    content_type = "text/cloud-config"

    # JSON is a subset of YAML, so cloud-init should
    # still accept this even though it's jsonencode.
    content = jsonencode({
      write_files = [
        {
          content     = file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/1-start-ebs-snapshot.ps1")
          path        = "C:\\aws\\1-start-ebs-snapshot.ps1"
          permissions = "0744"
        },
        {
          content     = file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/2-run-backup.cmd")
          path        = "C:\\aws\\2-run-backup.cmd"
          permissions = "0744"
        },
        {
          content     = file("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/3-ebs-snapshot.ps1")
          path        = "C:\\aws\\3-ebs-snapshot.ps1"
          permissions = "0744"
        },
      ]
    })
  }

The jsonencode and yamlencode Terraform functions know how to escape newlines and other special characters automatically, and so you can just include the file content as an attribute in the object and Terraform will encode it into a valid string literal automatically.
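For example, here is a minimal sketch of what jsonencode does with a multi-line string. The script content below is a made-up stand-in for the file(...) calls above, just to show the escaping behaviour:

  locals {
    # Hypothetical stand-in for file("${path.module}/.../1-start-ebs-snapshot.ps1")
    script = "Write-Host 'step one'\nWrite-Host 'step two'"
  }

  output "cloud_config" {
    # The newline is escaped to \n inside a quoted JSON string, so the
    # result is a single line of valid JSON, which is also valid YAML.
    value = jsonencode({
      write_files = [
        {
          content     = local.script
          path        = "C:\\aws\\example.ps1"
          permissions = "0744"
        },
      ]
    })
  }

Because the escaping happens inside jsonencode, the indentation of the surrounding template no longer matters at all.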


The second idea is to use base64 encoding instead of direct encoding. Cloud-init allows passing file contents as base64 if you set the additional property encoding to b64. You can then use Terraform's filebase64 function to read the contents of the file directly into a base64 string which you can then include into your YAML without any special quoting or escaping.

Using base64 also means that the files placed on the remote system should be byte-for-byte identical to the ones on disk in your Terraform module, whereas by using file into a YAML string there is the potential for line endings and other whitespace to be changed along the way.

On the other hand, one disadvantage of using base64 is that the file contents won't be directly readable in the terraform plan output, and so the plan won't be as clear as it would be with just plain YAML string encoding.
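To illustrate the base64 idea on its own, here is a sketch using a hand-written YAML template (the script path here is hypothetical). The filebase64 result contains no newlines or YAML metacharacters, so it can be interpolated without any quoting:

  data "template_cloudinit_config" "example" {
    gzip          = false
    base64_encode = false

    part {
      content_type = "text/cloud-config"
      content      = <<-EOT
        write_files:
          - encoding: b64
            content: ${filebase64("${path.module}/scripts/example.ps1")}
            path: C:\aws\example.ps1
            permissions: '0744'
      EOT
    }
  }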


You can potentially combine both of these ideas together by using the filebase64 function as part of the argument to jsonencode in the first example:

        # ...
        {
          encoding    = "b64"
          content     = filebase64("${path.module}/../../../../scripts/aws-ebs-snapshot-ps/1-start-ebs-snapshot.ps1")
          path        = "C:\\aws\\1-start-ebs-snapshot.ps1"
          permissions = "0744"
        },
        # ...
Martin Atkins

cloud-init can only reliably write files whose content you provide inline, so I'd suggest storing your files in S3 (for example) and pulling them down during boot.

Apologies in advance for the mixed Windows/Linux example.

Using the same write_files mechanism, write a short fetch script, e.g.:

#!/bin/bash
wget something.ps1
wget something-else.ps1

Then, using runcmd/bootcmd, run the files:

bootcmd:
  - ./something.ps1
  - ./something-else.ps1

Job done, without any encoding or character-escaping headaches.
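In Terraform/Windows terms, that idea might look something like the following sketch. It assumes a hypothetical bucket named my-scripts-bucket, an instance profile that allows s3:GetObject on it, and AWS Tools for PowerShell on the AMI (Read-S3Object is the relevant cmdlet); whether runcmd works on Windows also depends on which cloud-init variant the AMI uses:

  data "template_cloudinit_config" "fetch_scripts" {
    gzip          = false
    base64_encode = false

    part {
      content_type = "text/cloud-config"
      content = yamlencode({
        runcmd = [
          # Hypothetical bucket and key names; adjust to your setup.
          "powershell.exe -Command \"Read-S3Object -BucketName my-scripts-bucket -Key 1-start-ebs-snapshot.ps1 -File C:\\aws\\1-start-ebs-snapshot.ps1\"",
          "powershell.exe -File C:\\aws\\1-start-ebs-snapshot.ps1",
        ]
      })
    }
  }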

Alexander