
Our Python application is deployed as a Docker container. To release a fix to a single .py file, we build a new image containing the whole application. This way of deploying a fix seems suboptimal and too risky.

We want to release to production only the impacted component or file in each release (following the Open/Closed Principle), so as to keep the operational risk of each release under control, reduce QA testing time, etc.

I am a beginner in Docker design.

Is this agile mode of deployment possible with the Docker container design?

Thank you.

Tibo
  • What do you expect the answer to be? Yes, it is possible. – LinPy Dec 06 '19 at 11:49
  • Could you tell me how you would do it, please? I've seen multi-stage builds and the `docker cp` command, but neither approach seems optimal. Thanks. – Tibo Dec 06 '19 at 11:54
  • Maybe try Docker volumes? And without seeing what you run in the container, it is impossible to give a solution. – LinPy Dec 06 '19 at 12:10
  • What specific problem are you encountering? IME it's standard practice to build a new complete image for every change; using volumes or `docker cp` to try to inject code into a container would lead to an unreproducible runtime environment. – David Maze Dec 06 '19 at 12:12
  • Well, a Docker volume is also a solution, but it would mean we only partially use the Docker approach. Another approach: copy the files into the container (`docker cp`), then create a new image from it (`docker commit`)? (https://www.tjohearn.com/2018/02/01/hotfixing-docker-containers/) – Tibo Dec 06 '19 at 12:16
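
As a side note to the comments above: the "build a new complete image for every change" practice is usually cheap in Docker because of layer caching. A minimal sketch of the idea (the file layout, image tag, and module name here are hypothetical, not from the question):

```dockerfile
# Hypothetical layout: requirements.txt plus an app/ package.
FROM python:3.8-slim

WORKDIR /usr/src/app

# Dependency layer: rebuilt only when requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Application layer: a fix to a .py file invalidates only this
# layer, so `docker build` reuses the cached layers above and
# `docker push` uploads just this small changed layer.
COPY app/ ./app/

CMD ["python", "-m", "app"]
```

With this ordering, every release still produces a complete, reproducible image (avoiding the drift that `docker cp` + `docker commit` hotfixes can cause), while the rebuild and registry transfer cost of a one-file fix is limited to the final `COPY` layer.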

0 Answers