
I've been trying to set up Redash to run on ECS. I'm rather new to ECS and Docker in general, so I'm not sure if I'm missing something fundamental with what I've done so far.

So far, I've converted Redash's docker-compose file to an AWS container definition.

However, according to the Redash documentation, I need to run `docker-compose run --rm server create_db` first to set up the tables in the Postgres container.

How do I implement this `docker-compose run` behavior within the ECS context?

I noticed that forcing this behavior by adding a `postgres-setup` container to my container definition (shown below) works, but this feels like a hack and is non-ideal.

[
  {
    "name": "postgres-setup",
    "image": "redash/redash:latest",
    "cpu": 100,
    "memory": 150,
    "links": [
      "postgres",
      "redis"
    ],
    "command": [
        "create_db"
    ],
    "environment": [
      {
        "name": "PYTHONUNBUFFERED",
        "value": "0"
      },
      {
        "name": "REDASH_LOG_LEVEL",
        "value": "INFO"
      },
      {
        "name": "REDASH_REDIS_URL",
        "value": "redis://redis:6379/0"
      },
      {
        "name": "REDASH_DATABASE_URL",
        "value": "postgresql://postgres@postgres/postgres"
      },
      {
        "name": "REDASH_COOKIE_SECRET",
        "value": "veryverysecret"
      },
      {
        "name": "REDASH_WEB_WORKERS",
        "value": "1"
      }
    ],
    "essential": false
  },
  {
    "name": "nginx",
    "image": "redash/nginx:latest",
    "essential": false,
    "cpu": 100,
    "memory": 200,
    "links": [
      "server:redash"
    ],
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  },
  {
    "name": "postgres",
    "image": "postgres:9.5.6-alpine",
    "essential": true,
    "cpu": 100,
    "memory": 300,
    "mountPoints": [
      {
        "sourceVolume": "mytestvol",
        "containerPath": "/var/lib/postgresql/data"
      }
    ]
  },
  {
    "name": "redis",
    "image": "redis:3.0-alpine",
    "essential": true,
    "cpu": 100,
    "memory": 400
  },
  {
    "name": "server",
    "image": "redash/redash:latest",
    "cpu": 100,
    "memory": 400,
    "links": [
      "postgres",
      "redis"
    ],
    "command": [
        "server"
    ],
    "environment": [
      {
        "name": "PYTHONUNBUFFERED",
        "value": "0"
      },
      {
        "name": "REDASH_LOG_LEVEL",
        "value": "INFO"
      },
      {
        "name": "REDASH_REDIS_URL",
        "value": "redis://redis:6379/0"
      },
      {
        "name": "REDASH_DATABASE_URL",
        "value": "postgresql://postgres@postgres/postgres"
      },
      {
        "name": "REDASH_COOKIE_SECRET",
        "value": "veryverysecret"
      },
      {
        "name": "REDASH_WEB_WORKERS",
        "value": "1"
      }
    ],
    "essential": false,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ]
  },
  {
    "name": "worker",
    "image": "redash/redash:latest",
    "cpu": 100,
    "memory": 400,
    "links": [
      "postgres",
      "redis"
    ],
    "command": [
      "scheduler"
    ],
    "environment": [
      {
        "name": "PYTHONUNBUFFERED",
        "value": "0"
      },
      {
        "name": "REDASH_LOG_LEVEL",
        "value": "INFO"
      },
      {
        "name": "REDASH_REDIS_URL",
        "value": "redis://redis:6379/0"
      },
      {
        "name": "REDASH_DATABASE_URL",
        "value": "postgresql://postgres@postgres/postgres"
      },
      {
        "name": "QUEUES",
        "value": "queries,scheduled_queries,celery"
      },
      {
        "name": "WORKERS_COUNT",
        "value": "1"
      }
    ],
    "essential": false
  }
]

I know running Postgres in a container like this isn't ideal, and I may move it into RDS later on. If I decide to do that, what would the database initialization step look like? Create an ECS instance, download the docker-compose.yml file, and run `docker-compose run --rm server create_db` to do the one-off setup before starting the ECS service?
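
For what it's worth, I've come across `aws ecs run-task` with container overrides, which looks like it might be the ECS-native equivalent of a one-off `docker-compose run`. The cluster name and task definition family below are just placeholders on my part:

# cluster name and task definition family are placeholders
aws ecs run-task \
  --cluster my-redash-cluster \
  --task-definition redash \
  --count 1 \
  --overrides '{"containerOverrides": [{"name": "server", "command": ["create_db"]}]}'

Would something along those lines be the right direction, or is there a more idiomatic way to handle the one-off setup?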

1 Answer


You don't.

You're looking at this in a couple of wrong ways here:

1) Docker Compose is a development tool, not a production one. You can use a compose file in production, but that is done via Docker Swarm, and some features only work with docker-compose while others only work with Docker Swarm.

2) With that said, ECS is an orchestrator, which is exactly what Docker Swarm is (and an orchestrator is what you want), so running Docker Swarm on top of ECS is entirely moot.

What you want to do here is create an ECS Task Definition; this is essentially the ECS version of a compose file. You set up the image, port bindings, volumes, container links, and so on.
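
As a rough example (the container-definitions file name and the volume host path are placeholders, not something taken from your setup), registering your container definitions as a task definition could look like:

# redash-containers.json would contain the containerDefinitions array from the question;
# the host path backing "mytestvol" is just an example
aws ecs register-task-definition \
  --family redash \
  --volumes '[{"name": "mytestvol", "host": {"sourcePath": "/ecs/redash-postgres"}}]' \
  --container-definitions file://redash-containers.json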

So you have a choice to make here: whether you want to run on ECS, or whether you want to set up a Docker Swarm.

Best of luck!

TJ Biddle
  • I'll need to read and experiment some more for these concepts to really sink in, but I have already created a Task Definition (implied when I mentioned I converted the compose file to a container definition). The issue I was facing was trying to mimic the init command `docker-compose run --rm server create_db` in a Task Definition. I think it would be ideal to init the database when I build the `postgres` image, but the issue is that Redash uses the `server` docker service to initialize the `postgres` container. – Sep 11 '18 at 09:25