I'm running into an issue where it seems I can only run a Python command in either the Dockerfile or the Kubernetes manifest, but not both. Right now I have two Python scripts: the first sets up keys and tokens so the second can run properly.
My dockerfile looks something like this:
FROM python:3.8.0-alpine
WORKDIR /code
COPY requirements.txt .
COPY script1.py .
COPY script2.py .
# Install build dependencies and python libraries
RUN apk add build-base linux-headers
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD [ "python", "-u", "script2.py" ]
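For reference, a minimal sketch of chaining both scripts at the image level, so the ordering lives in the Dockerfile itself (assuming both scripts sit in /code as above), would be:

```dockerfile
# Sketch: run script1 once, then hand the process over to script2.
# 'exec' replaces the shell so script2 receives signals (e.g. SIGTERM) directly.
CMD ["sh", "-c", "python -u script1.py && exec python -u script2.py"]
```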
and my Kubernetes yaml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: script2
  labels:
    app: script2-app
spec:
  selector:
    matchLabels:
      app: script2-app
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: script2-app
    spec:
      containers:
        - name: script2-app
          image: script2:v1.0.1
          ports:
            - containerPort: 5000
          env:
            - name: KEYS_AND_TOKENS
              valueFrom:
                secretKeyRef:
                  name: my-secret
                  key: KEYS_AND_TOKENS
          command:
            - "python"
            - "script1.py"
The issue starts with the 'command' portion of the yaml file. Without it, Kubernetes runs the container as usual (the container can still run without the keys and tokens; it just logs that some functions failed to run and moves on). However, when I include the 'command' portion, script1 runs and successfully sets up the keys. But once script1 finishes, nothing else happens: the deployment continues to run, but script2 never starts.
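As I understand it, in Kubernetes 'command' overrides the image's ENTRYPOINT and 'args' overrides its CMD, so setting 'command' to run script1 replaces the Dockerfile's CMD entirely, and script2 is never invoked. A sketch of chaining both scripts inside the container spec (same manifest as above) would be:

```yaml
command: ["sh", "-c"]
args: ["python -u script1.py && exec python -u script2.py"]
```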
The reason I am doing it this way is that script2 may need to restart on occasion when internet connection failures cause it to crash. Since all script1 does is set up keys and tokens, it only needs to run once; after that, things are set up for as long as the pod lives. I don't want to verify the keys and tokens every time script2 restarts. This is why the two scripts are separate and why I only run script1 at startup.
Any help would be much appreciated!