I need to deploy a container that receives specific runtime args, say a target IP to run a bash script against.
I can get the container up and running on a single Docker host and everything works just fine.
Since the script is resource-intensive and takes some time to execute, it would be interesting to schedule, say, 50 replicas across 5 different hosts (each one with a different target IP), and for that Docker Swarm seems the straightforward option.
Say the script is
test.sh
#!/bin/bash
# first argument is the IP to scan
TARGET_IP=$1
# grepable (-oG) output of a full-range port scan
FOO=$(nmap -p- -oG - "$TARGET_IP")
echo "waiting 1s for every open port"
# count entries like "22/open" in the grepable output
x=$(echo "$FOO" | grep -oE '[0-9]+/open' | wc -l)
sleep "${x}s"
echo "end"
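To illustrate the counting step without running a real scan, here is a sketch against a hypothetical sample line of nmap's grepable (-oG) output (the sample data is made up; only ports marked /open should be counted):

```shell
# Hypothetical -oG sample: two open ports, one closed
SAMPLE='Host: 172.0.0.1 () Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///'
# count entries like "22/open"; closed ports do not match
open_ports=$(echo "$SAMPLE" | grep -oE '[0-9]+/open' | wc -l)
echo "$open_ports"
```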
Dockerfile
FROM alpine:latest
RUN apk update && apk add bash nmap
COPY test.sh /test.sh
RUN chmod +x /test.sh
docker build -t high-load-script .
(command for a single host) docker run --rm -it high-load-script /test.sh 172.0.0.1
target hosts are 172.0.0.1 to 172.0.0.100
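Since the targets are a contiguous range, the per-target commands could be generated with a loop; this sketch only prints them (piping the output to a shell, or to xargs -P for local parallelism, would actually launch the containers):

```shell
# Generate (not execute) one docker run command per target host
CMDS=$(for i in $(seq 1 100); do
  echo "docker run --rm high-load-script /test.sh 172.0.0.$i"
done)
# show the first generated command
echo "$CMDS" | head -1
```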
please excuse any obvious mistakes in the code since I'm typing this on my phone =)
My first idea was to make a simple web server that, on receiving a GET/POST, runs the script and kills the container so that this target doesn't get targeted again; or to have a zombie script that waits until an ENV var is defined, calls the script, and kills the container, something like:
#add to Dockerfile
COPY zombie.sh /zombie.sh
RUN chmod +x /zombie.sh
ENTRYPOINT ["/zombie.sh"]
zombie.sh
#!/bin/bash
# poll until TARGET_IP is set, then run the scan once and exit
while true; do
  if [ -n "$TARGET_IP" ]; then
    /test.sh "$TARGET_IP"
    exit 0
  else
    sleep 5s
  fi
done
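A quick stand-in demo of the polling pattern zombie.sh relies on; the real /test.sh is replaced by an echo and the 5s sleep is shortened, purely for illustration:

```shell
# Wait until TARGET_IP is non-empty, then "run" the scan once and return
wait_for_target() {
  while true; do
    if [ -n "$TARGET_IP" ]; then
      echo "running scan against $TARGET_IP"
      return 0
    fi
    sleep 0.1
  done
}

# with the variable already set, the loop exits on the first iteration
TARGET_IP=172.0.0.7
RESULT=$(wait_for_target)
echo "$RESULT"
```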
How can I implement this using swarm to leverage the workload distribution?