
I built an image, say my_dbt_image, with a custom install of a program, say dbt. I also wrote a small wrapper script, my_dbt, along these lines:

#!/usr/bin/env bash
# flags for the container, e.g. a working directory and a bind mount of the caller's directory
flags=(--workdir=/vol --volume "$PWD":/vol)
docker run --rm "${flags[@]}" my_dbt_image dbt "$@"

so that when a user enters my_dbt <args> in their terminal, the script actually runs dbt <args> inside the container. Hope this makes sense.
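For instance, with the flags above, a user typing

my_dbt run --select my_model

(an arbitrary dbt command and model name, just for illustration) ends up running

docker run --rm --workdir=/vol --volume "$PWD":/vol my_dbt_image dbt run --select my_model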

It seems to work fine, but it's a bit slow. I figure that to speed things up, instead of starting a new container every time the user enters a command, I should reuse the same container, leveraging docker exec.
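To make the idea concrete, here is roughly what I have in mind (the container name my_dbt_container and the sleep infinity keep-alive command are just placeholders I picked for the example):

# run once, to create a long-lived container that just idles
docker run --detach --name my_dbt_container \
    --workdir=/vol --volume "$PWD":/vol \
    my_dbt_image sleep infinity

# my_dbt would then reduce to something like
docker exec my_dbt_container dbt "$@"

One thing I notice with this approach is that the bind mount is fixed when the container is created, so the container is tied to whatever $PWD was at creation time rather than to the directory each command is run from.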

Currently, after the command is run, the container goes into a stopped state (status exited). This makes sense. But I'm a bit confused about the logic of docker exec: why does the container need to be running in order to throw a new command at it?

In my situation, do you think I should:

  • stop the container after each user command is executed, and (re)start it when a new user command is entered (roughly as sketched after this list), or
  • keep the container running?
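
For the first option, I picture my_dbt doing roughly this on every invocation (again just a sketch, reusing the placeholder container from above):

docker start my_dbt_container          # wake the stopped container
docker exec my_dbt_container dbt "$@"  # run the user's command
docker stop my_dbt_container           # stop it again until the next command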

Any comments or advice on my approach are welcome.

Brice
    A container is a wrapper around a single process; when that process exits the container ends. It's not like a VM where the container has an independent long-running identity and you can launch new services inside it. For this setup, `docker run --rm` seems like a better match, or better, run the program outside a container directly on the host. – David Maze Jul 24 '22 at 15:12
  • Have you run benchmarks to see how much time you would save? – BMitch Jul 24 '22 at 16:27
