
I'm new to docker swarm. I'm able to deploy my services on various nodes, but the environment variables exported from a Dockerfile ENTRYPOINT script are not set for tasks deployed in the swarm cluster.

Setup

  • docker version 18.09.1, build 4c52b90
  • docker-compose version 1.23.2, build 1110ad01
  • Django 2.1.5
  • PostgreSQL 10

When I try to run a one-off command inside a django task, using docker exec -t CONTAINER_ID sh to get into the container and then executing python manage.py migrate, I get the following error:

Error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "manage.py", line 38, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 211, in fetch_command
    settings.INSTALLED_APPS
  File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 57, in __getattr__
    self._setup(name)
  File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 44, in _setup
    self._wrapped = Settings(settings_module)
  File "/usr/local/lib/python3.6/site-packages/django/conf/__init__.py", line 107, in __init__
    mod = importlib.import_module(self.SETTINGS_MODULE)
  File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/app/config/settings/production.py", line 15, in <module>
    DATABASES['default'] = env.db('DATABASE_URL')  # noqa F405
  File "/usr/local/lib/python3.6/site-packages/environ/environ.py", line 202, in db_url
    return self.db_url_config(self.get_value(var, default=default), engine=engine)
  File "/usr/local/lib/python3.6/site-packages/environ/environ.py", line 275, in get_value
    raise ImproperlyConfigured(error_msg)
django.core.exceptions.ImproperlyConfigured: Set the DATABASE_URL environment variable

So DATABASE_URL is not set as an environment variable inside my docker container. As stated above, it is exported from an ENTRYPOINT script that is invoked in the Dockerfile.
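One quick way to confirm this (a sketch; CONTAINER_ID stands in for a real container id of a django task):

# Prints "DATABASE_URL=<unset>" when the variable never reached this shell
docker exec CONTAINER_ID sh -c 'echo "DATABASE_URL=${DATABASE_URL:-<unset>}"'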

Dockerfile

FROM python:3.6-alpine

ENV PYTHONUNBUFFERED 1

RUN apk update \
  # psycopg2 dependencies
  && apk add --virtual build-deps gcc python3-dev g++ musl-dev \
  && apk add postgresql-dev \
  # Pillow dependencies
  && apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
  # CFFI dependencies
  && apk add libffi-dev py-cffi \
  # Translations dependencies
  && apk add gettext \
  # https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
  && apk add postgresql-client

# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install -r /requirements/production.txt \
    && rm -rf /requirements

# Create the unprivileged user referenced by the chown/USER directives below
RUN addgroup -S django \
    && adduser -S -G django django

COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
RUN chown django /entrypoint

COPY ./compose/production/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
RUN chown django /start

COPY ./compose/production/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r//' /start-celeryworker
RUN chmod +x /start-celeryworker
RUN chown django /start-celeryworker

COPY ./compose/production/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r//' /start-celerybeat
RUN chmod +x /start-celerybeat
RUN chown django /start-celerybeat

COPY ./compose/production/django/celery/flower/start /start-flower
RUN sed -i 's/\r//' /start-flower
RUN chmod +x /start-flower

COPY . /app

RUN chown -R django /app

USER django

WORKDIR /app

ENTRYPOINT ["/entrypoint"]

ENTRYPOINT Script

#!/bin/sh

set -o errexit
set -o pipefail
set -o nounset


# N.B. If only .env files supported variable expansion...
export CELERY_BROKER_URL="${REDIS_URL}"

if [ -z "${POSTGRES_USER:-}" ]; then
    base_postgres_image_default_user='postgres'
    export POSTGRES_USER="${base_postgres_image_default_user}"
fi
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"

postgres_ready() {
python << END
import sys

import psycopg2

try:
    psycopg2.connect(
        dbname="${POSTGRES_DB}",
        user="${POSTGRES_USER}",
        password="${POSTGRES_PASSWORD}",
        host="${POSTGRES_HOST}",
        port="${POSTGRES_PORT}",
    )
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)

END
}
until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available'

exec "$@"

This was taken from pydanny's django-cookiecutter project. Everything works in a normal non-swarm setup: docker-compose -f production.yml build and docker-compose -f production.yml up for a single-instance production deployment.
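As an aside, the last line of the script is what makes the pattern work: exec "$@" replaces the shell with whatever command was passed in, so every variable exported above it is inherited by that process and its children. A minimal sketch of the same mechanism (demo.sh is a hypothetical name):

#!/bin/sh
# demo.sh: variables exported before the exec survive into the exec'd command,
# so `./demo.sh env` lists DEMO_VAR=hello, while a separately spawned shell
# in the same container would not see it.
export DEMO_VAR=hello
exec "$@"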

Last, here is what my docker-compose file looks like for the swarm:

Docker-compose.yml

version: '3.6'

volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_caddy: {}
  node-modules:

networks:
  webnet:
    driver: overlay
    attachable: true

services:
  django: &django
    image: registry:image
    depends_on:
      - postgres
      - redis
    env_file:
      - PATH to .env

    command: /start
    deploy:
      mode: replicated
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
    networks:
      - webnet

  postgres:
    image: registry:image
    volumes:
      - production_postgres_data:/var/lib/postgresql/data
      - production_postgres_data_backups:/backups
    env_file:
      - PATH to .env
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
      placement:
        constraints:
          - node.role == manager
    networks:
      - webnet

  frontend:
    image: registry:image
    command: /start
    volumes:
      - node-modules:/app/node_modules
    ports:
      - "3000:3000"
    deploy:
      mode: replicated
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
    networks:
      - webnet

  caddy:
    image: registry:image
    depends_on:
      - django
      - frontend
    volumes:
      - production_caddy:/root/.caddy
    env_file:
       - PATH to .env
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
    deploy:
      placement:
        constraints:
          - node.role == manager
    networks:
      - webnet

  redis:
    image: redis:3.2
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - webnet

I'm not sure why the environment variables exported from the entrypoint script are not set when the tasks are deployed to nodes using docker stack deploy --with-registry-auth -c production.yml my_swarm.

Any help with this, or an alternative approach to setting the env variables, would be appreciated. I could not find documentation that links Dockerfile entrypoint scripts to docker swarm tasks / services.

EDIT:

I'm assuming I have to somehow utilize https://docs.docker.com/engine/swarm/secrets/, but I would like to keep the entrypoint script.

EDIT 2: Found the resource; I need to adapt my process: https://docs.docker.com/engine/swarm/secrets/#build-support-for-docker-secrets-into-your-images
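The pattern those docs describe amounts to reading the secret from the file that swarm mounts under /run/secrets inside the entrypoint. A hedged sketch (the secret name postgres_password is hypothetical, not from my stack):

# Prefer a swarm secret file if present, otherwise fall back to the env variable
if [ -f /run/secrets/postgres_password ]; then
    POSTGRES_PASSWORD="$(cat /run/secrets/postgres_password)"
    export POSTGRES_PASSWORD
fi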

EDIT 3: After more inspection, all environment variables except the ones set in the entrypoint script carried over to each task. I was able to get into a django container using docker exec and run the same commands from the script to create DATABASE_URL and CELERY_BROKER_URL. However, I still do not know why entrypoint scripts can't be used to create the environment variables.
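A way to see both sides of this at once (a sketch; it assumes the service's main process is still PID 1 in the container, which it is with the exec chain above). The variables are present in the environment of the container's main process, just not in a fresh docker exec shell:

# New shell spawned by docker exec: nothing is printed
docker exec CONTAINER_ID sh -c 'env | grep DATABASE_URL'

# Environment of PID 1, the process the entrypoint exec'd into: DATABASE_URL shows up
docker exec CONTAINER_ID sh -c "tr '\0' '\n' < /proc/1/environ | grep DATABASE_URL"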

  • If you set a variable with an entrypoint, it will not be visible with a `docker exec` since the `docker exec` creates a new shell that is not a child of your entrypoint. – BMitch Feb 11 '19 at 22:47
  • Ah, got it. So, just to clarify, the variables are still being set when the task spins up in the node, and can be utilized by the task. However, it is not visible to me when using `docker exec` to do one-off commands and I have to set them again for that instance. – Dan R Feb 12 '19 at 19:23
  • I wouldn't even phrase it as "task", it's more the child processes to the entrypoint script that have access to the variables. The task is creating the container which is running the entrypoint script, so task is a few levels too high. – BMitch Feb 12 '19 at 19:26
  • Thanks for the clarification @BMitch! Finally grasping the concept. – Dan R Feb 12 '19 at 22:51

1 Answer


This is resolved thanks to BMitch; see the comments. For anyone else who runs into this: entrypoint scripts work fine when a task creates the container / child process, so any variables set in them are available to the container's child processes.

The non-issue was that when I used docker exec to run one-off commands inside a specific container, it created a new shell that does not go through the entrypoint script, and therefore does not have access to the variables set there. However, you can set them again in that shell and its child processes will have access to them, e.g. for database migrations.
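Since the entrypoint ends with exec "$@", one way to avoid re-exporting everything by hand for one-off commands (a suggestion building on the explanation above, not from the original post) is to route the command through the entrypoint itself, so its exports run first:

# /entrypoint exports the variables, waits for Postgres, then execs the command
docker exec -it CONTAINER_ID /entrypoint python manage.py migrate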

  • Did you find a simple solution how to easily set the variables in the shell started by "docker exec", without doing it manually each time? I have a bunch of variables over several env files which are automatically loaded with docker-compose. So "docker-compose run --rm django python manage.py migrate" is easy. With a docker stack, I have still no clue how to have all these variables after "docker exec". Sadly, something as "docker stack run" does not exist. – mcrot Nov 24 '21 at 15:35