10

What is the easiest way to provide one or several external configuration file(s) to an app running as an AWS Fargate task?

  • The files cannot be part of the Docker image because they depend on the stage environment and may contain secrets.
  • Creating an EFS volume just for this seems over-engineered (we only need read access to a few KB of properties).
  • Using the AWS SDK to access an S3 bucket at startup means the app has a dependency on the SDK, and one has to manage S3 buckets.*
  • Using AWS AppConfig would still require the app to use the AWS SDK to access the config values.*
  • Having hundreds of key-value pairs in the Parameter Store would be ugly.

*It is an important aspect of our applications not to depend on the AWS SDK, because we need to be able to deploy to different cloud platforms, so solutions that avoid it are preferable.

It would be nice to just be able to define this in the task definition, so that Fargate mounts a couple of files in the container. Is this or a similar low-key solution available?

Tim van Beek

3 Answers

3

There's a specific feature of AWS Systems Manager for that purpose, called AWS AppConfig. It helps you deploy application configuration just like code deployments, but without the need to re-deploy the code if a configuration value changes.

The following article illustrates the integration between containers and AWS AppConfig: Application configuration deployment to container workloads using AWS AppConfig.
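If avoiding the AWS SDK inside the application is a hard requirement, AWS also publishes an AppConfig Agent container image that can run as a sidecar in the same task and serve the configuration over plain HTTP on localhost (port 2772 by default). A sketch of what the extra container definition might look like (the image tag is a placeholder; check the current agent documentation for the exact reference):

    {
        "name": "appconfig-agent",
        "image": "public.ecr.aws/aws-appconfig/aws-appconfig-agent:2.x",
        "essential": false,
        "portMappings": [
            {
                "containerPort": 2772,
                "protocol": "tcp"
            }
        ]
    }

The application can then fetch its configuration with an ordinary HTTP GET against http://localhost:2772/applications/&lt;app&gt;/environments/&lt;env&gt;/configurations/&lt;profile&gt;, with no SDK dependency in the application code itself.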

Dennis Traub
    Thanks, but this still needs the AWS SDK to access the AppConfig Service. I will update my question to explain why this is a drawback. – Tim van Beek Dec 08 '20 at 18:13
  • If you want to be able to deploy to multiple platforms, you’ll have to live with the least common denominator. That means you can’t leverage the benefits of any cloud platform and probably have to build pretty much everything yourself. It is always a trade-off: do you want to have all the building blocks available and leverage the breadth and depth of highly integrated AWS services, or do you want to be platform-agnostic and use the cloud as just another data center, having to build, manage, and maintain everything yourself? – Dennis Traub Dec 08 '20 at 18:33
  • Don’t get me wrong, both choices are valid. One involves just much more work. If your ability to potentially deploy to a different platform some time in the future justifies doing all the undifferentiated heavy lifting yourself, you can do so. But then you won’t be able to use the low-key tools available in the cloud. – Dennis Traub Dec 08 '20 at 18:39
  • @Dennis Traub, judging by the question, staying platform-agnostic is clearly important in this case. I fully understand why projects want to stay platform-agnostic as much as possible, and in fact I would try to do so for the projects I work on. If the same solution were implemented on Azure, for example, we could use AzureAppConfigurationBuilder or AzureKeyVaultConfigBuilder. The problem is that there's no implementation of the Configuration Builder for AWS. For a simple task such as this you shouldn't have to rely on platform-specific services. – Kayes Aug 24 '21 at 01:12
1

You can specify your AWS AppConfig dependency as a separate container. AWS gives you the option to set container dependency execution conditions in your Task Definition. See: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html

You could set the container dependency condition to COMPLETE for the container that pulls the config files from AppConfig, and then just treat the files as a dumb mount, separating out the AWS dependency completely. For example:

    "containerDefinitions": [
        {
            "name": "app-config-script",
            "image": "1234567890.dkr.ecr.SOME_REGION.amazonaws.com/app-config-script:ver",
            "essential": false,
            "mountPoints": [
                {
                    "sourceVolume": "config",
                    "containerPath": "/data/config/nginx",
                    "readOnly": false
                }
            ],
            "dependsOn": null,
            "repositoryCredentials": {
                "credentialsParameter": ""
            }
        },
        {
            "name": "nginx",
            "image": "nginx",
            "essential": true,
            "portMappings": [
                {
                    "containerPort": 80,
                    "protocol": "tcp"
                },
                {
                    "containerPort": 443,
                    "protocol": "tcp"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "config",
                    "containerPath": "/etc/nginx",
                    "readOnly": true
                }
            ],
            "dependsOn": [
                {
                    "containerName": "app-config-script",
                    "condition": "COMPLETE"
                }
            ],
            "repositoryCredentials": {
                "credentialsParameter": ""
            }
        }
    ],
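Note that both mount points reference a shared volume named config. On Fargate this would be declared as a task-scoped ephemeral volume at the top level of the task definition; a minimal sketch:

    "volumes": [
        {
            "name": "config"
        }
    ]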

Your Entrypoint/CMD script in the bootstrap container would then be something like:

    #!/bin/sh
    set -e
    # Start an AppConfig configuration session and capture the initial token
    token=$(aws appconfigdata start-configuration-session \
        --application-identifier "${APPLICATION_ID}" \
        --environment-identifier "${ENVIRONMENT_ID}" \
        --configuration-profile-identifier "${CONFIGURATION_ID}" \
        | jq -r .InitialConfigurationToken)
    # Write the latest configuration to the shared volume, where the
    # dependent container will read it after this container completes
    aws appconfigdata get-latest-configuration \
        --configuration-token "${token}" /data/config/nginx/nginx.conf
0

Not an answer to the question, but in case someone comes here looking for solutions: we had the same requirements but did not find an easy way to deploy a configuration file directly to the ECS instance for the container to read. I'm sure it's possible, it just would have been difficult to configure, so we did not consider it worth the effort.

What we did:

  1. Added EnvironmentConfigBuilder as described in the MS docs here
  2. Passed in configuration values using environment variables as described in the AWS docs here.
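For secret values, the same mechanism extends to the secrets field of the container definition, which lets ECS inject values from Parameter Store or Secrets Manager as environment variables without any SDK code in the application. A sketch (the names and the ARN are placeholders):

    "environment": [
        { "name": "STAGE", "value": "production" }
    ],
    "secrets": [
        {
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:ssm:SOME_REGION:1234567890:parameter/db-password"
        }
    ]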
Kayes