
This is a request for comments and opinions. I am fairly new to Docker.

I want containers for production and development (and maybe unit tests too) for a Python project. My search pointed toward a multi-stage Dockerfile (and multiple docker-compose files to run the stages).
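For context, by "multiple docker-compose files" I mean a base compose file plus a dev override, roughly like this (the service name `web` is just a placeholder):

```yaml
# docker-compose.dev.yml -- layered on top of the base file with:
#   docker compose -f docker-compose.yml -f docker-compose.dev.yml up
services:
  web:
    build:
      context: .
      target: development   # select the dev stage of a multi-stage Dockerfile
    volumes:
      - .:/app              # mount source for live editing in development
```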

All of the articles, comments, and examples I found on the subject start with a development image and use it as the FROM for a production build (some add a test image in between). This makes no sense to me: it requires cleaning up test files and removing dev/test tools and packages, which means tracking why each package was installed in the first place. That seems error-prone.

Should we not start with the minimal 'production' setup first? Then add the necessary debug tools and development configuration for a dev image?
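To illustrate what I have in mind, the layering would look something like this (image names, packages, and the `myapp` module are placeholders, not a real setup):

```dockerfile
# Minimal production image comes first
FROM python:3.12-slim AS production
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "-m", "myapp"]

# Development image built on top of production
FROM production AS development
# Extra tooling only the dev image needs
RUN pip install --no-cache-dir pytest debugpy ipython
```

A dev compose file would then build with `target: development`, while production builds stop at the `production` stage.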

What's the proper (recommended?) way of doing this, and why? I want to do things cleanly.

Thanks in advance. GG

1 Answer


I can tell you how I am doing it and why it turned out to be the better approach for me.

I don't keep separate development and production images. I have a base image, built on Alpine, with the bare minimum binaries and dependencies, and a simple two-stage build in my Dockerfile.
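A minimal sketch of that kind of two-stage build (the package names and the `myapp` module are assumptions, not my exact setup):

```dockerfile
# Stage 1: builder with compilers and headers for native dependencies
FROM python:3.12-alpine AS builder
RUN apk add --no-cache build-base
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: runtime with only the bare minimum
FROM python:3.12-alpine
COPY --from=builder /install /usr/local
WORKDIR /app
COPY . .
CMD ["python", "-m", "myapp"]
```

Only the second stage ends up in the final image, so compilers and build tools never ship to production.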

For testing and then production, there is a CI/CD setup. The moment code is merged into the prod branch, a container is built from that same Dockerfile. After all the test cases pass, the image is pushed to Docker Hub with the latest tag. This final image is then ready for deployment.

Since the same Dockerfile is used for development and production, with tests run before a release tag is made, system-dependency issues and "it works on my machine but not on yours" problems are eliminated. The two-stage build also keeps the image lightweight and the build time optimized.

Ayush Pallav