
I have a resource that I am creating in Terraform. Within the resource there is an attribute that reads in its values from a separate JSON file, and I want to select that file based on my Terraform workspace. Below are my resource and the error message. If it is possible to use Terraform workspaces within the `file` function, any insight on how to achieve this would be helpful.

Terraform Resource

resource "aws_ecs_task_definition" "task_definition" {
  family                   = "${var.application_name}-${var.application_environment[var.region]}"
  execution_role_arn       = aws_iam_role.ecs_role.arn
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  requires_compatibilities = ["FARGATE"]
  container_definitions    = file("scripts/ecs/${terraform.workspace}.json")
}

Terraform Error

Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal object into Go value of type []*ecs.ContainerDefinition

on ecs.tf line 26, in resource "aws_ecs_task_definition" "task_definition":
  26:   container_definitions    = file("scripts/ecs/${terraform.workspace}.json")

I am looking to approach it this way because I have multiple Terraform workspaces set up and would like to keep my TF scripts as identical as possible.
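For reference, the per-workspace layout I have in mind looks something like this (the workspace names here are just examples; each file is picked up by the `file()` call above when that workspace is selected):

```
scripts/ecs/
├── default.json   # used when terraform.workspace == "default"
├── dev.json       # after: terraform workspace select dev
└── prod.json      # after: terraform workspace select prod
```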

Container Definition

{
  "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/fargate-devstage",
          "awslogs-region": "us-east-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": [
        "[\"sh\"",
        "\"/tmp/init.sh\"]"
      ],
      "portMappings": [
        {
          "hostPort": 9003,
          "protocol": "tcp",
          "containerPort": 9003
        }
      ],
      "cpu": 0,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "image": "xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/fargate:latest",
      "essential": true,
      "name": "fargate"
    }
  ],
  "placementConstraints": [],
  "memory": "1024",
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "taskDefinitionArn": "arn:aws:ecs:us-east-2:xxxxxxxxxxxx:task-definition/fargate-devstage:45",
  "family": "fargate-devstage",
  "requiresAttributes": [
    {
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "name": "ecs.capability.execution-role-awslogs"
    },
    {
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    },
    {
      "name": "ecs.capability.execution-role-ecr-pull"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "name": "ecs.capability.task-eni"
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "512",
  "revision": 45,
  "status": "ACTIVE",
  "volumes": []
}
Dave Michaels
    I would have expected a different error if it couldn't open the file. It sounds like it can't parse the file. Are you sure it's a valid container definition? If you hard-code the path temporarily without the workspace interpolation does it work? You could also try the `local_file` datasource instead https://registry.terraform.io/providers/hashicorp/local/latest/docs/data-sources/file – Mark B Sep 23 '21 at 21:05
    Absolutely agree the error message implies the JSON structure is not recognized as valid for the ECS container definitions. Please provide an example JSON in the question. – Matthew Schuchard Sep 23 '21 at 21:07
  • Yeah actually I did try it by hard coding. There is an error with my container definition file. – Dave Michaels Sep 23 '21 at 21:07
  • I have added the container definition json file it is complaining about. – Dave Michaels Sep 23 '21 at 21:19
  • I have discovered other issues pertaining to this issue. I copied over this json after creating this Task definition via the console. Still not sure why TF is having issues with it. – Dave Michaels Sep 23 '21 at 22:02
    It's unfortunate that the AWS provider is just passing through what looks like a raw error message from Go's JSON library here, rather than something in JSON/ECS terms, but I agree that this seems to be the provider rejecting the JSON in the file that was successfully loaded, not an error actually loading the file. – Martin Atkins Sep 23 '21 at 22:28
  • That error originates [in the provider's validation rule for that argument](https://github.com/hashicorp/terraform-provider-aws/blob/0d3c743db3d6a7e5bc7c37586640f37564b16a8a/aws/resource_aws_ecs_task_definition.go#L1081:6), so it is indeed a validation error rather than a file loading error. – Martin Atkins Sep 23 '21 at 22:29
  • I think the clue here is that the error message says `[]*ecs.ContainerDefinition`, which is the Go equivalent of a JSON array of objects conforming to a particular schema. Your definition file contains only an object, not an array of objects. – Martin Atkins Sep 23 '21 at 22:31

2 Answers


You have to provide only the container definitions, not the entire task definition, in `container_definitions`. So your JSON would be something along the lines of:

 [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/fargate-devstage",
          "awslogs-region": "us-east-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": [
        "sh",
        "/tmp/init.sh"
      ],
      "portMappings": [
        {
          "hostPort": 9003,
          "protocol": "tcp",
          "containerPort": 9003
        }
      ],
      "cpu": 0,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "image": "xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/fargate:latest",
      "essential": true,
      "name": "fargate"
    }
  ]

All other task-related settings, such as the task execution role, CPU, memory, etc., must be provided directly in the `aws_ecs_task_definition` resource, not in `container_definitions`.
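Alternatively, if you'd rather keep the console-exported file untouched, one possible sketch (assuming Terraform 0.12+ and the file layout shown in the question) is to decode the full export and re-encode only the `containerDefinitions` key:

```
# Decode the full exported task definition, then re-encode just the
# containerDefinitions array, which is the shape the provider expects.
locals {
  exported = jsondecode(file("scripts/ecs/${terraform.workspace}.json"))
}

resource "aws_ecs_task_definition" "task_definition" {
  family                   = "${var.application_name}-${var.application_environment[var.region]}"
  execution_role_arn       = aws_iam_role.ecs_role.arn
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  requires_compatibilities = ["FARGATE"]
  container_definitions    = jsonencode(local.exported.containerDefinitions)
}
```

Note that fields like `taskDefinitionArn`, `revision`, and `status` in the export are output-only and are simply ignored by this approach.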

Marcin

There are many ways you can approach this; however, in my opinion the best one is to use the `template_file` data source with variable replacement.

Here is an example of how you can use it:

data "template_file" "task_definition" {
  template = file("${path.module}/files/task_definition.json")

  vars = {
    region              = var.region
    secrets_manager_arn = module.xxxx.secrets_manager_version_arn
    container_memory    = var.container_memory
    memory_reservation  = var.container_memory_reservation
    container_cpu       = var.container_cpu
  }
}

resource "aws_ecs_task_definition" "task" {
  family                   = "${var.environment}-${var.app_name}"
  execution_role_arn       = aws_iam_role.ecs_task_role.arn
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = var.fargate_ec2_cpu
  memory                   = var.fargate_ec2_memory
  task_role_arn            = aws_iam_role.ecs_task_role.arn
  container_definitions    = data.template_file.task_definition.rendered
}

Note how the data source is consumed through its `rendered` attribute, which gives you the file contents with the variables interpolated:

data.template_file.task_definition.rendered

For the template format and more info about template files, you can refer to Terraform's official documentation here: https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/file

Edit 1: I should also add that if you take this approach, you must define the variables required by your template in your Terraform configuration for each workspace.
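One caveat: the `template_file` data source lives in the `template` provider, which HashiCorp has deprecated; on Terraform 0.12+ the built-in `templatefile` function does the same job with no extra provider. A rough equivalent of the example above (same assumed file path and variables, which come from that example, not from the question):

```
resource "aws_ecs_task_definition" "task" {
  family                   = "${var.environment}-${var.app_name}"
  execution_role_arn       = aws_iam_role.ecs_task_role.arn
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = var.fargate_ec2_cpu
  memory                   = var.fargate_ec2_memory
  task_role_arn            = aws_iam_role.ecs_task_role.arn

  # templatefile() renders the template inline, so no data source is needed.
  container_definitions = templatefile("${path.module}/files/task_definition.json", {
    region             = var.region
    container_memory   = var.container_memory
    memory_reservation = var.container_memory_reservation
    container_cpu      = var.container_cpu
  })
}
```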

Diego Velez