50

I have a Python script in my docker container that needs to be executed, but I also need to have interactive access to the container once it has been created (with /bin/bash).

I would like to be able to create my container, have my script executed and be inside the container to see the changes/results that have occurred (no need to manually execute my python script).

The current issue I am facing is that if I use the CMD or ENTRYPOINT commands in the Dockerfile I am unable to get back into the container once it has been created. I tried using docker start and docker attach but I'm getting the error:

sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stepped container, start it first"

Ideally, something close to this:

sudo docker run -i -t image /bin/bash python myscript.py

Assume my python script contains something like this (it's irrelevant what it does; in this case it just creates a new file with text):

open('newfile.txt','w').write('Created new file with text\n')

When I create my container I want my script to execute and I would like to be able to see the content of the file. So something like:

root@66bddaa892ed# sudo docker run -i -t image /bin/bash
bash4.1# ls
newfile.txt
bash4.1# cat newfile.txt
Created new file with text
bash4.1# exit
root@66bddaa892ed#

In the example above my python script would have executed upon creation of the container to generate the new file newfile.txt. This is what I need.

will.fiset

6 Answers

58

My way of doing it is slightly different, with some advantages. It is actually a multi-session setup (a long-running container) rather than a one-shot script, but it can be even more useful in some scenarios:

# Just create interactive container. No start but named for future reference.
# Use your own image.
docker create -it --name new-container <image>

# Now start it.
docker start new-container

# Now attach bash session.
docker exec -it new-container bash

The main advantage is that you can attach several bash sessions to a single container. For example, I can exec one session to tail a log and run the actual commands in another.

BTW, when you detach from the last exec session your container is still running, so it can keep performing operations in the background.
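For example, a quick sketch of two concurrent sessions against the same container (the log path here is just a placeholder):

# terminal 1: follow a log file inside the container
docker exec -it new-container tail -f /var/log/app.log

# terminal 2: a separate interactive shell in the same container
docker exec -it new-container bash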

Roman Nikitchenko
10

You can run a docker image, execute a script, and have an interactive session, all with a single command:

sudo docker run -it <image-name> bash -c "<your-script-full-path>; bash"

The second bash keeps the interactive terminal session open, irrespective of the CMD instruction in the Dockerfile the image was created with, since that CMD is overridden by the bash -c command above.

There is also no need to append a command like local("/bin/bash") to your Python script (or bash in the case of a shell script).

Assuming the script has not yet been transferred from the Docker host to the docker image by an ADD Dockerfile command, we can map the volume and run the script from there:

sudo docker run -it -v <host-location-of-your-script>:/scripts <image-name> bash -c "/scripts/<your-script-name>; bash"

Example: assuming the python script from the original question is already on the docker image, we can omit the -v option, and the command is as simple as:

sudo docker run -it image bash -c "python myscript.py; bash"
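The same idea can also be baked into the image itself, as discussed in the comments below. A minimal Dockerfile sketch, assuming myscript.py is added to the image (the python:2 base and paths are just illustrative):

FROM python:2
ADD myscript.py /usr/src/myscript.py
# run the script, then keep an interactive shell open as the default command
CMD ["bash", "-c", "python /usr/src/myscript.py; bash"]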

Olli
  • Sorry for this late answer. I had tried Roman's answer, but the image will immediately stop after the `docker start` unless the image has been created with a CMD service that is never ending (like e.g. an interactive bash). James' nice answer works because of the `local("/bin/bash")`. In my case, I was looking for a bash script instead of a Python script, and the same works if I append a `bash` command at the end of the shell script. Nice idea. – Olli Aug 09 '16 at 11:32
  • Any idea how to do this from the Dockerfile? – Alexandros Ioannou Oct 27 '16 at 09:24
  • @AlexandrosIoannou: there are several ways to do this in a Dockerfile. 1) CMD ["bash", "-c", "<your-script>; bash"] will define a default command in the Dockerfile. With that, you can run 'sudo docker run -it <image-name>' without specifying the command. 2) Another way would be to define an ENTRYPOINT in a similar way. An ENTRYPOINT will not be overridden by a command appended to the docker run command (but can be overridden with a --entrypoint option). 3) A third option would be to run the script in a RUN command, which would bake the script execution into the docker image. – Olli Oct 28 '16 at 08:47
  • This didn't work for me. `docker run -it image2 bash -c "python script.py; bash"` exits and does not enter a bash shell. – Max888 Mar 24 '21 at 22:36
5

Why not this?

docker run --name="scriptPy" -i -t image bash -c "python myscript.py"
docker cp scriptPy:/path/to/newfile.txt /path/to/host
vim /path/to/host/newfile.txt

Or, if you want the file to stay in the container:

docker run --name="scriptPy" -i -t image bash -c "python myscript.py"
docker start scriptPy
docker attach scriptPy
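As a small variant (not in the original answer), docker start -ai combines the last two steps, starting the container and attaching to it in one command:

docker start -ai scriptPy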

Hope it was helpful.

Regan
3

I think this is what you mean.

Note: this uses Fabric (because I'm too lazy and/or don't have the time to work out how to wire stdin/stdout/stderr up to the terminal properly, but you could spend the time and use plain subprocess.Popen):

Output:

$ docker run -i -t test
Entering bash...
[localhost] local: /bin/bash
root@66bddaa892ed:/usr/src/python# cat hello.txt
Hello World!root@66bddaa892ed:/usr/src/python# exit
Goodbye!

Dockerfile:

# Test Docker Image

FROM python:2

# copy the script into the image; the shebang lets it run as /usr/bin/myscript
ADD myscript.py /usr/bin/myscript

# Fabric is used by the script to spawn the interactive shell
RUN pip install fabric

# make sure the script is executable (in case the host copy was not)
RUN chmod +x /usr/bin/myscript

CMD ["/usr/bin/myscript"]

myscript.py:

#!/usr/bin/env python

from __future__ import print_function

from fabric.api import local

# write the test file before dropping into the shell
with open("hello.txt", "w") as f:
    f.write("Hello World!")

print("Entering bash...")
# spawn an interactive shell; this blocks until the shell exits
local("/bin/bash")
print("Goodbye!")
James Mills
  • This is interesting. Is it possible to do without Python as a base? I edited my post to give an example of what I'm looking for, thanks! – will.fiset Aug 03 '14 at 12:03
  • @Caker I'm not really sure what you're asking. Your question states executing `myscript.py` and I assume once this is run you want to inspect the container by spawning `bash` as a subprocess? – James Mills Aug 04 '14 at 01:30
1

Sometimes you cannot simply do $ docker run -it <image>, as it might run the entrypoint (or the image's default command). In that case you can do the following (say, for the image python:3.9-slim):

$ docker run -itd python:3.9-slim
    b6b54c042af2085b0e619c5159fd776875af44351d22096b0c574ac1a7798318

$ docker ps
    CONTAINER ID        IMAGE                      COMMAND                  NAMES
    b6b54c042af2        python:3.9-slim            "python3"                sleepy_davinci

$ docker exec -it b6b54c042af2 bash

I ran docker ps just to show that the container is running, together with its container id. But you can also use the full container id (b6b54c042af2085b0e619c5159fd776875af44351d22096b0c574ac1a7798318) returned from docker run -itd ...
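If an image really does define an ENTRYPOINT that gets in the way, another option (not from the original answer) is to override it directly with the --entrypoint flag:

docker run -it --entrypoint bash python:3.9-slim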

Kuldeep Jain
0

Sometimes your python script may depend on other files in your folder, like other python scripts, CSV files, JSON files, etc.

I think the best approach is sharing the directory with your container, which makes it easier to create one environment that has access to all the required files.

Create a script file:

sudo nano /usr/local/bin/dock-folder

Add this as its content:

#!/bin/bash

echo "IMAGE = $1"

## image name is the first param
IMAGE="$1"

## container name is created combining the image and the folder address hash
CONTAINER="${IMAGE}-$(pwd | md5sum | cut -d ' ' -f 1)"
echo "${IMAGE} ${CONTAINER}"

# remove the image from this dir, if exists
## rm                                      remove container command
## pwd | md5                               get the unique code for the current folder
## "${IMAGE}-$(pwd | md5sum)"                   create a unique name for the container based in the folder and image
## --force                                 force the container be stopped and removed
if [[ "$2" == "--reset" || "$3" == "--reset" ]]; then
        echo "## removing previous container ${CONTAINER}"
        docker rm "${CONTAINER}" --force
fi

# create one special container for this folder based in the python image and let this folder mapped
## -it                                     interactive mode
## pwd | md5                               get the unique code for the current folder
## --name="${CONTAINER}"                   create one container with unique name based in the current folder and image
## -v "$(pwd)":/data                       create ad shared volume mapping the current folder to the /data inside your container
## -w /data                                define the /data as the working dir of your container
## -p 80:80                                some port mapping between the container and host ( not required )
## "${IMAGE}"                              name of the image used as the starting point
echo "## creating container ${CONTAINER} as ${IMAGE} image"
docker create -it --name="${CONTAINER}" -v "$(pwd)":/data -w /data -p 80:80 "${IMAGE}"

# start the container
docker start "${CONTAINER}"

# enter in the container, interactive mode, with the shared folder and running python
docker exec -it "${CONTAINER}" bash

# remove the container after exit
if [[ "$2" == "--remove" || "$3" == "--remove" ]]; then
        echo "## removing container ${CONTAINER}"
        docker rm "${CONTAINER}" --force
fi

Add execution permission

sudo chmod +x /usr/local/bin/dock-folder 

Then you can call the script from within your project folder:

# creates, if it does not exist, a unique container for this folder and image, and opens an interactive bash session in it
dock-folder python

# destroy if the container already exists and replace it
dock-folder python --replace

# destroy the container after closing the interactive mode
dock-folder python --remove

This call will create a new python container sharing your folder. That makes all the files in the folder, such as CSV or binary files, accessible inside the container.

Using this strategy, you can quickly test your project in a container and interact with the container to debug it.
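For instance, once the session opens you are already in /data, where the host folder is mounted, so something like the question's scenario works directly (an illustrative session):

python myscript.py
cat newfile.txt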

One issue with this approach is reproducibility. You may install something from your shell session that your application requires to run, but that change happens only inside your container. So anyone who tries to run your code will have to figure out what you did and do the same.

So, if you can run your project without installing anything special, this approach may suit you well. But if you had to install or change things in your container to be able to run your project, you should probably create a Dockerfile to record those commands. That makes all the steps (loading the container, making the required changes, and loading the files) easy to replicate.

Thiago Mata