
I'm attempting to inject a ReplicationController's randomly generated pod name suffix (i.e. the {replicaID} in multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer not to add that special case to the script running inside the container, for compatibility reasons.

If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.

apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
      - env:
        - name: INSTANCE_ID
          value: $(replicaID)

I've tried adding a command into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.

Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this work without editing the script inside the container?

  • For the "without editing the script" part, are you willing to hijack the `command:` to turn it into `command: ["/bin/bash", "-c", "...; exec /dockerfile-entrypoint.sh"]` so it still runs their script unmodified but runs your script _first_? – mdaniel Sep 18 '18 at 22:51
  • Theoretically that could work. Seems a bit hacky, to be honest. I'll do that for now unless there's a real solution for it. – Stumblinbear Sep 18 '18 at 23:01
  • To the best of my knowledge, there is no "executable" actions one can take from a manifest; you can reach into some metadata fields, but none of them contain the thing you're looking for (again, AFAIK) – mdaniel Sep 19 '18 at 02:44
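A minimal sketch of the two ideas from the comments above, combining the Downward API (which can expose the full Pod name via fieldRef; no field exposes only the suffix) with a wrapper command that derives INSTANCE_ID before exec'ing the image's original script. Both your-image and /docker-entrypoint.sh are placeholders; substitute your actual image and entrypoint:

spec:
  template:
    spec:
      containers:
      - name: multiverse
        image: your-image               # placeholder
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name  # full pod name, e.g. multiverse-nffj1
        # Strip everything up to the last dash, then run the image's original script unmodified
        command: ["/bin/bash", "-c", "export INSTANCE_ID=${POD_NAME##*-}; exec /docker-entrypoint.sh"]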

1 Answer


There is an answer by Anton Kostenko about injecting DB credentials into container environment variables, and the same approach can be applied to your case. The key is the content of the init container spec.

You can use an init container to extract the hash from the container's hostname and write it to a file on a shared volume that is mounted into the main container.

In this example, the init container puts the Pod name into the INSTANCE_ID environment variable; you can modify it to suit your needs.

Create the init.yaml file with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
  # Main container sources the file written by the init container
  - name: init-test
    image: ubuntu
    args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  # Init container writes INSTANCE_ID=<pod name> to the shared volume before the main container starts
  - name: init-init
    image: busybox
    command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}

Create the pod using the following command:

kubectl create -f init.yaml

Check that the Pod initialization has finished and the Pod is Running:

kubectl get pod init-test

Check the logs to see the results of this example configuration:

$ kubectl logs init-test
init-test
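Note that the example above puts the full Pod name (here, init-test) into INSTANCE_ID. If you only want the part after the last dash (nffj1 for a pod named multiverse-nffj1), a minimal, untested variant of the init container command could strip the prefix before writing the file:

# Keep only the part of the hostname after the last dash
command: ["sh","-c","h=$(hostname); echo -n INSTANCE_ID=${h##*-} > /data/config"]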
VAS