I built an image, say `my_dbt_image`, with a custom install of a program, say `dbt`. I also wrote a little wrapper script, `my_dbt`, along the lines of:
#!/usr/bin/env bash
# define flags, e.g. flags=(--workdir=/vol --volume "$PWD:/vol")
docker run --rm "${flags[@]}" my_dbt_image dbt "$@"
so that when a user enters `my_dbt <args>` in their terminal, the script actually runs `dbt <args>` inside the container. Hope this makes sense.
This seems to work fine, if a bit slow. I figure that to speed things up, instead of spinning up a new container every time the user enters a command, I should reuse the same container, leveraging `docker exec`.
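For reference, the exec-based variant I have in mind would look something like this (the container name `my_dbt_daemon` and the `sleep infinity` keep-alive trick are just placeholders I'm considering, not something I've settled on):

```shell
# One-time setup: start a long-lived container whose main process
# just sleeps, so the container stays in the 'running' state.
docker run -d --name my_dbt_daemon \
  --workdir=/vol --volume "$PWD:/vol" \
  my_dbt_image sleep infinity

# Each user command then becomes an exec into that same container:
docker exec my_dbt_daemon dbt "$@"
```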
Currently, after the command runs, the container goes into the stopped state (status `exited`). That makes sense, but I'm a bit confused about the logic of `docker exec`: why does the container need to be running in order to throw a new command at it?
In my situation, do you think I should:
- stop the container after each user command is executed, and (re)start it when a new user command is entered, or
- keep the container running?
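For the first option, I imagine the wrapper would do something like this (again, the container name `my_dbt_daemon` is just a placeholder, and the container would have to exist already):

```shell
# Reuse a stopped container: start it, run the command, stop it again.
docker start my_dbt_daemon >/dev/null
docker exec my_dbt_daemon dbt "$@"
docker stop my_dbt_daemon >/dev/null
```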
Any comments or advice on my approach are welcome.