I've been trying to set up Redash to run on ECS. I'm fairly new to ECS and Docker in general, so I'm not sure whether I'm missing something fundamental. So far, I've converted Redash's docker-compose file into an ECS container definition.
However, according to the Redash documentation, I first need to run docker-compose run --rm server create_db to set up the tables in the Postgres container. How do I reproduce this docker-compose run behavior in the ECS context?
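One idea I've had (untested, so this may be off base) is to keep the one-off command out of the service entirely and instead launch it as a one-off ECS task with a container override, roughly along these lines, where my-cluster and redash-task are placeholder names:

aws ecs run-task \
  --cluster my-cluster \
  --task-definition redash-task \
  --overrides '{"containerOverrides": [{"name": "server", "command": ["create_db"]}]}'

I'm not sure whether that's the intended pattern, though, or how it should interact with the linked postgres and redis containers in the same task definition.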
I noticed that I can force this behavior by adding a postgres-setup container to my container definition (shown below), but this feels like a hack and is not ideal.
[
  {
    "name": "postgres-setup",
    "image": "redash/redash:latest",
    "cpu": 100,
    "memory": 150,
    "links": [
      "postgres",
      "redis"
    ],
    "command": [
      "create_db"
    ],
    "environment": [
      {
        "name": "PYTHONUNBUFFERED",
        "value": "0"
      },
      {
        "name": "REDASH_LOG_LEVEL",
        "value": "INFO"
      },
      {
        "name": "REDASH_REDIS_URL",
        "value": "redis://redis:6379/0"
      },
      {
        "name": "REDASH_DATABASE_URL",
        "value": "postgresql://postgres@postgres/postgres"
      },
      {
        "name": "REDASH_COOKIE_SECRET",
        "value": "veryverysecret"
      },
      {
        "name": "REDASH_WEB_WORKERS",
        "value": "1"
      }
    ],
    "essential": false
  },
  {
    "name": "nginx",
    "image": "redash/nginx:latest",
    "essential": false,
    "cpu": 100,
    "memory": 200,
    "links": [
      "server:redash"
    ],
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  },
  {
    "name": "postgres",
    "image": "postgres:9.5.6-alpine",
    "essential": true,
    "cpu": 100,
    "memory": 300,
    "mountPoints": [
      {
        "sourceVolume": "mytestvol",
        "containerPath": "/var/lib/postgresql/data"
      }
    ]
  },
  {
    "name": "redis",
    "image": "redis:3.0-alpine",
    "essential": true,
    "cpu": 100,
    "memory": 400
  },
  {
    "name": "server",
    "image": "redash/redash:latest",
    "cpu": 100,
    "memory": 400,
    "links": [
      "postgres",
      "redis"
    ],
    "command": [
      "server"
    ],
    "environment": [
      {
        "name": "PYTHONUNBUFFERED",
        "value": "0"
      },
      {
        "name": "REDASH_LOG_LEVEL",
        "value": "INFO"
      },
      {
        "name": "REDASH_REDIS_URL",
        "value": "redis://redis:6379/0"
      },
      {
        "name": "REDASH_DATABASE_URL",
        "value": "postgresql://postgres@postgres/postgres"
      },
      {
        "name": "REDASH_COOKIE_SECRET",
        "value": "veryverysecret"
      },
      {
        "name": "REDASH_WEB_WORKERS",
        "value": "1"
      }
    ],
    "essential": false,
    "portMappings": [
      {
        "containerPort": 5000,
        "hostPort": 5000
      }
    ]
  },
  {
    "name": "worker",
    "image": "redash/redash:latest",
    "cpu": 100,
    "memory": 400,
    "links": [
      "postgres",
      "redis"
    ],
    "command": [
      "scheduler"
    ],
    "environment": [
      {
        "name": "PYTHONUNBUFFERED",
        "value": "0"
      },
      {
        "name": "REDASH_LOG_LEVEL",
        "value": "INFO"
      },
      {
        "name": "REDASH_REDIS_URL",
        "value": "redis://redis:6379/0"
      },
      {
        "name": "REDASH_DATABASE_URL",
        "value": "postgresql://postgres@postgres/postgres"
      },
      {
        "name": "QUEUES",
        "value": "queries,scheduled_queries,celery"
      },
      {
        "name": "WORKERS_COUNT",
        "value": "1"
      }
    ],
    "essential": false
  }
]
I know that running Postgres in a container like this isn't ideal, and I may move it to RDS later on. If I do, what would the database initialization step look like? Would I spin up an ECS instance, download the docker-compose.yml file, and run docker-compose run --rm server create_db as a one-off setup step before starting the ECS service?
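For what it's worth, what I imagine the RDS variant might look like (again untested, with a made-up RDS endpoint and credentials) is running the setup command once from any machine that can reach the database:

docker run --rm \
  -e REDASH_DATABASE_URL=postgresql://redash_user:secret@my-redash.abc123.us-east-1.rds.amazonaws.com:5432/redash \
  redash/redash:latest create_db

with the other REDASH_* variables added if the create_db entrypoint needs them. I'm not sure whether that's the usual approach or whether there's a cleaner, ECS-native way to run this kind of one-off step.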