
I want to upload multiple files to AWS S3 from a specific folder on my local machine. I am running into the following error.

[screenshot of the error]

Here is my Terraform code.

resource "aws_s3_bucket" "testbucket" {
    bucket = "test-terraform-pawan-1"
    acl = "private"

    tags = {
        Name  = "test-terraform"
        Environment = "test"
    }
}

resource "aws_s3_bucket_object" "uploadfile" {
  bucket = "test-terraform-pawan-1"
  key     = "index.html"
  source = "/home/pawan/Documents/Projects/"

}

How can I solve this problem?

– pawan19

4 Answers


As of Terraform 0.12.8, you can use the fileset function to get a list of files for a given path and pattern. Combined with for_each, you should be able to upload every file as its own aws_s3_bucket_object:

resource "aws_s3_bucket_object" "dist" {
  for_each = fileset("/home/pawan/Documents/Projects/", "*")

  bucket = "test-terraform-pawan-1"
  key    = each.value
  source = "/home/pawan/Documents/Projects/${each.value}"
  # etag makes the file update when it changes; see https://stackoverflow.com/questions/56107258/terraform-upload-file-to-s3-on-every-apply
  etag   = filemd5("/home/pawan/Documents/Projects/${each.value}")
}

See terraform-providers/terraform-provider-aws issue #3020, "aws_s3_bucket_object: support for directory uploads", on GitHub.

Note: This does not set metadata like content_type, and as far as I can tell there is no built-in way for Terraform to infer the content type of a file. That metadata matters for things like serving the files correctly to a browser over HTTP. If that's important to you, look into specifying each file manually instead of automatically grabbing everything out of a folder.
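If you do need content_type while keeping the automated loop, one common workaround (not part of this answer) is a small map from file extension to MIME type; a minimal sketch, assuming Terraform 0.12.20+ for try() and the same example paths as above:

locals {
  # Hypothetical extension-to-MIME-type map; extend it with whatever types you serve.
  mime_types = {
    ".html" = "text/html"
    ".css"  = "text/css"
    ".js"   = "application/javascript"
    ".json" = "application/json"
    ".png"  = "image/png"
  }
}

resource "aws_s3_bucket_object" "dist" {
  for_each = fileset("/home/pawan/Documents/Projects/", "*")

  bucket = "test-terraform-pawan-1"
  key    = each.value
  source = "/home/pawan/Documents/Projects/${each.value}"
  etag   = filemd5("/home/pawan/Documents/Projects/${each.value}")

  # Look up the file extension; fall back to a generic type when the
  # extension is missing or not in the map.
  content_type = lookup(
    local.mime_types,
    try(regex("\\.[^.]+$", each.value), ""),
    "application/octet-stream"
  )
}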

– meustrus
    You can specify the `content_type` via a module: https://registry.terraform.io/modules/hashicorp/dir/template/latest – Flair Feb 13 '21 at 23:55
  • @Flair, that is an excellent link! It has a section specific to S3 that directly answers this question. I'd like to leave my answer alone since not everyone will need `content_type`. Could you please quote the S3 example code and link as a separate answer so it is more visible? – meustrus Feb 15 '21 at 23:35
  • Done! I have even provided a smaller snippet for `.tf.json`. – Flair Feb 16 '21 at 22:51
  • Note that `source_hash` is better than `etag` in the case of encryption. – sdgfsdh Jun 03 '22 at 22:14

You are trying to upload a directory, whereas Terraform expects a single file in the source argument. Uploading an entire folder to an S3 bucket is not yet supported.

However, you can invoke awscli commands using a null_resource provisioner, as suggested here.

resource "null_resource" "remove_and_upload_to_s3" {
  provisioner "local-exec" {
    command = "aws s3 sync ${path.module}/s3Contents s3://${aws_s3_bucket.site.id}"
  }
}
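One caveat, also raised in the comments below: the provisioner only runs when the null_resource is first created, so later file changes are not synced. A common workaround (not part of the original answer) is a triggers block keyed on a hash of the directory contents; a sketch, assuming Terraform 0.12+:

resource "null_resource" "remove_and_upload_to_s3" {
  # Hash every file under s3Contents so the resource is replaced (and the
  # sync re-runs) whenever any file is added, changed, or removed.
  triggers = {
    dir_sha1 = sha1(join("", [for f in fileset("${path.module}/s3Contents", "**") : filesha1("${path.module}/s3Contents/${f}")]))
  }

  provisioner "local-exec" {
    command = "aws s3 sync ${path.module}/s3Contents s3://${aws_s3_bucket.site.id}"
  }
}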
– Vikyol
  • This wouldn't use the same security credentials as running the aws_s3_bucket_object examples. – Christian Feb 02 '22 at 15:22
  • It only works once, the first time you run "terraform apply". If you have done that and later add/modify/delete files in the source, they won't be synced to the S3 bucket again, because the "null_resource" already exists. So this is definitely not a proper solution to this problem. – cluxter Jul 12 '23 at 00:56

Since June 9, 2020, there has been an official Terraform module, hashicorp/dir/template, that can infer the content type (and a few other attributes) of each file, which you may need as you upload to an S3 bucket.

HCL format:

module "template_files" {
  source = "hashicorp/dir/template"

  base_dir = "${path.module}/src"
  template_vars = {
    # Pass in any values that you wish to use in your templates.
    vpc_id = "vpc-abc123"
  }
}

resource "aws_s3_bucket_object" "static_files" {
  for_each = module.template_files.files

  bucket       = "example"
  key          = each.key
  content_type = each.value.content_type

  # The template_files module guarantees that only one of these two attributes
  # will be set for each file, depending on whether it is an in-memory template
  # rendering result or a static file on disk.
  source  = each.value.source_path
  content = each.value.content

  # Unless the bucket has encryption enabled, the ETag of each object is an
  # MD5 hash of that object.
  etag = each.value.digests.md5
}

JSON format:

{
  "resource": {
    "aws_s3_bucket_object": {
      "static_files": {
        "for_each": "${module.template_files.files}"
        #...
      }
    }
  }
}

Source: https://registry.terraform.io/modules/hashicorp/dir/template/latest

– Flair
  • This should be the accepted answer. The [highest vote one](https://stackoverflow.com/a/58827910/5668956) doesn't resolve the content type correctly, which will cause a problem when trying to serve HTML files from S3. – ThangLeQuoc Jul 28 '23 at 02:00

My objective was to make this dynamic, so that whenever I create a folder in the directory, Terraform automatically uploads that new folder and its contents to the S3 bucket with the same key structure.

Here's how I did it.

First, build a local variable with a list of every folder and the files under it. Then loop through that list to upload each file to the S3 bucket.

Example: I have a folder called "Directories" with two subfolders, "Folder1" and "Folder2", each with their own files.

- Directories
  - Folder1
    * test_file_1.txt
    * test_file_2.txt
  - Folder2
    * test_file_3.txt

Step 1: Get the local variable.

locals {
  folder_files = flatten([for d in flatten(fileset("${path.module}/Directories/*", "*")) : trim(d, "../")])
}

The output looks like this:

folder_files = [
  "Folder1/test_file_1.txt",
  "Folder1/test_file_2.txt",
  "Folder2/test_file_3.txt",
]

Step 2: Dynamically upload the S3 objects.

resource "aws_s3_object" "this" {
  for_each = { for idx, file in local.folder_files : idx => file }

  bucket       = aws_s3_bucket.this.bucket
  key          = "/Directories/${each.value}"
  source       = "${path.module}/Directories/${each.value}"
  etag = "${path.module}/Directories/${each.value}"
}

This loops over the local variable, so your S3 bucket ends up with the same structure as the local Directories folder, its subdirectories, and their files:

- Directories
  - Folder1
    - test_file_1.txt
    - test_file_2.txt
  - Folder2
    - test_file_3.txt
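A possibly simpler variant of the same idea, not part of the workflow above, is to let fileset's "**" pattern handle the recursion directly, assuming the same Directories layout and the aws_s3_object resource from AWS provider v4+:

resource "aws_s3_object" "directories" {
  # "**" matches files in Directories and every subdirectory, returning
  # paths relative to Directories (e.g. "Folder1/test_file_1.txt").
  for_each = fileset("${path.module}/Directories", "**")

  bucket = aws_s3_bucket.this.bucket
  key    = "Directories/${each.value}"
  source = "${path.module}/Directories/${each.value}"
  etag   = filemd5("${path.module}/Directories/${each.value}")
}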
– SudoHaris