
My Environment

docker 17.12-ce
python 3.6.3
django 1.10.8

I have a django application that I want to containerise.

Trying to maintain best practice, I have followed the advice to split the settings.py file into a base file plus one file per stage.

So my base.py file, where it loads the secret settings, looks like this:

# Settings imported from a JSON file
import json
import os

from django.core.exceptions import ImproperlyConfigured

with open(os.environ.get('SECRET_CONFIG')) as f:
    configs = json.loads(f.read())

def get_secret(setting, configs=configs):
    try:
        val = configs[setting]
        if val == 'True':
            val = True
        elif val == 'False':
            val = False
        return val
    except KeyError:
        error_msg = "ImproperlyConfigured: Set the {0} setting in the secrets file".format(setting)
        raise ImproperlyConfigured(error_msg)
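
A quick self-contained check of that loader (the secrets file and its values here are dummies created purely for illustration):

```python
import json
import os
import tempfile

# Write a throwaway secrets file and point SECRET_CONFIG at it
secrets = {"SECRET_KEY": "dummy-key", "DEBUG": "True"}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(secrets, f)
os.environ["SECRET_CONFIG"] = f.name

# Load it the same way base.py does
with open(os.environ.get("SECRET_CONFIG")) as fh:
    configs = json.loads(fh.read())

def get_secret(setting, configs=configs):
    val = configs[setting]
    if val == "True":
        return True
    if val == "False":
        return False
    return val

print(get_secret("SECRET_KEY"))  # dummy-key
print(get_secret("DEBUG"))       # True (string coerced to bool)
```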

And it gets the file path from the SECRET_CONFIG environment variable.

This works well when running the application locally without docker.

I have created a Dockerfile that uses the python:3.6.4-onbuild image.

My Dockerfile looks like this:

# Dockerfile
# FROM directive instructing base image to build upon
FROM python:3.6.4-onbuild

MAINTAINER Lance Haig

RUN mkdir media static logs
# Note: WORKDIR is not an ENV variable, so it is not substituted in a JSON-form
# VOLUME instruction; use the full path instead
VOLUME ["/usr/src/app/logs/"]

# COPY startup script into known file location in container
COPY docker-entrypoint.sh /docker-entrypoint.sh

# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000

# CMD specifies the command to execute to start the server running.
CMD ["/docker-entrypoint.sh"]
# done!

The docker-entrypoint.sh file looks like this:

#!/bin/bash
python manage.py migrate                  # Apply database migrations
python manage.py collectstatic --noinput  # Collect static files

# Prepare log files and start outputting logs to stdout
touch /usr/src/app/logs/gunicorn.log
touch /usr/src/app/logs/access.log
tail -n 0 -f /usr/src/app/logs/*.log &

export DJANGO_SETTINGS_MODULE=django-app.settings.development

# Start Gunicorn processes
echo Starting Gunicorn.
# exec gunicorn django-app.wsgi:application --bind 0.0.0.0:8000 --workers 3
exec gunicorn django-app.wsgi:application \
    --name sandbox_django \
    --bind 0.0.0.0:8000 \
    --workers 3 \
    --log-level=info \
    --log-file=/usr/src/app/logs/gunicorn.log \
    --access-logfile=/usr/src/app/logs/access.log \
    "$@"

I have tried setting the SECRET_CONFIG environment variable when I start the container using this command:

docker run -e SECRET_CONFIG=/home/stokvis/dev/app/secrets.json --name django-app-test -it django-app:latest

but it seems that the variable is not picked up inside the container.

Is there a better way to provide the secrets to an image if it is to be run on a Docker host or a Kubernetes cluster?

Have I missed something basic?

Lance Haig
  • I may be mistaken, but how do you copy your project files into the docker container? Are you sure that the path of the project is the same in your docker container as it is in your local environment (/home/stokvis/dev/app/)? You might need to mount your secrets.json file into the docker project in your Dockerfile as you do with the logs folder. – Emin Mastizada Feb 01 '18 at 00:55
  • You are correct, I could mount the secrets file into the container. The challenge is that I am trying to keep anything that needs to be kept secret out of the container. I will keep investigating this. – Lance Haig Feb 02 '18 at 16:50
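
For reference, the bind mount suggested in the comments might look like this; the container-side path is illustrative, and SECRET_CONFIG must point at the path as seen *inside* the container:

```shell
# Mount the host secrets file read-only into the container and point
# SECRET_CONFIG at the container-side path (path is illustrative)
docker run \
  -v /home/stokvis/dev/app/secrets.json:/run/secrets/app/secrets.json:ro \
  -e SECRET_CONFIG=/run/secrets/app/secrets.json \
  --name django-app-test -it django-app:latest
```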

1 Answer


I have decided to use Kubernetes/Docker secrets to provide these values.

I kept the base settings file and added stage-specific files for development and production; the secret values are supplied through environment variables.

As an example, the SECRET_KEY setting in base.py looks like this:

SECRET_KEY = os.environ.get('SECRET_KEY')

Then I use the following snippet in the Kubernetes deployment to pull the setting out of the secret:

- name: SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: sandbox-app-secret
      key: SECRET_KEY
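
The sandbox-app-secret referenced above has to exist in the cluster first; it could be created like this (the literal value is a placeholder):

```shell
# Create the secret the deployment references (value is a placeholder)
kubectl create secret generic sandbox-app-secret \
  --from-literal=SECRET_KEY='replace-with-a-real-key'
```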
Lance Haig
  • Current best practices advise against doing this exactly. Secrets managed through environment variables in Docker are easily viewed and should not be considered secure. – Christopher Hunter Mar 04 '20 at 00:05
  • Okay, so this answer isn't secure. What would you propose @ChristopherHunter? Would you be willing to answer the question yourself? – Kurt Mar 12 '21 at 01:50
  • @Rakaim The solution I've used is probably more complex than can entirely be described here, but essentially: add all secrets to Vault and set up a python connection object that can retrieve secrets. Replace all `os.environ.get` or similar calls in your `settings.py` file with a method that fetches that secret from Vault instead. The initial Vault connection credentials can be stored as a Docker Swarm secret, or in Jenkins, or any other system that can actually do encrypted secret storage and won't expose it in logs etc. – Christopher Hunter Mar 13 '21 at 02:19
  • Nice. Thank you! – Kurt Mar 24 '21 at 21:08
  • For anyone finding this later, the comment above is incorrect. Using credentials in env vars in a Dockerfile is very bad. However, that's not what is happening here. – coderanger Jul 29 '21 at 21:52
  • @coderanger the point was having credentials set as environment variables at all is almost as bad as having them in the Dockerfile. They can be exposed in logs, etc and are easy to reveal – Christopher Hunter Apr 20 '22 at 07:46