
I have a deployment which includes a configMap, persistentVolumeClaim, and a service. I have changed the configMap and re-applied the deployment to my cluster. I understand that this change does not automatically restart the pod in the deployment:

configmap change doesn't reflect automatically on respective pods

Updated configMap.yaml but it's not being applied to Kubernetes pods

I know that I can kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml. But that destroys the persistent volume which has data I want to survive the restart. How can I restart the pod in a way that keeps the existing volume?

Here's what wiki.yaml looks like:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dot-wiki
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": false,
      "allowed": "*"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiki-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiki
  template:
    metadata:
      labels:
        app: wiki
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
      - name: wiki-config
        image: dobbs/farm:restrict-new-wiki
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          allowPrivilegeEscalation: false
        volumeMounts:
          - name: dot-wiki
            mountPath: /home/node/.wiki
        command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
      containers:
      - name: farm
        image: dobbs/farm:restrict-new-wiki
        command: [
          "wiki", "--config", "/etc/config/config.json",
          "--admin", "bad password but memorable",
          "--cookieSecret", "any-random-string-will-do-the-trick"]
        ports:
        - containerPort: 3000
        volumeMounts:
          - name: dot-wiki
            mountPath: /home/node/.wiki
          - name: config-templates
            mountPath: /etc/config
      volumes:
      - name: dot-wiki
        persistentVolumeClaim:
          claimName: dot-wiki
      - name: config-templates
        configMap:
          name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
  name: wiki-service
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 80
  selector:
    app: wiki

Eric Dobbs

3 Answers


In addition to kubectl rollout restart deployment, there are a couple of alternative approaches:

1. Restart Pods

kubectl delete pods -l app=wiki

This causes the Pods of your Deployment to be recreated by the Deployment's ReplicaSet, and the new Pods read the updated ConfigMap.
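
If you want to watch this happen, the Deployment's selector label from the manifest above can be used to follow the Pods as they are replaced:

# watch the old Pods terminate and their replacements come up
kubectl get pods -l app=wiki -w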

2. Version the ConfigMap

Instead of naming your ConfigMap just wiki-config, name it wiki-config-v1. Then when you update your configuration, just create a new ConfigMap named wiki-config-v2.
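
For example, the new ConfigMap could look like the one from the question, differing only in its name and whatever settings you changed (the secure_cookie value below is just a placeholder for your actual change):

apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config-v2
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": true,
      "allowed": "*"
    }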

Now, edit your Deployment specification to reference the wiki-config-v2 ConfigMap instead of wiki-config-v1:

apiVersion: apps/v1
kind: Deployment
# ...
      volumes:
      - name: config-templates
        configMap:
          name: wiki-config-v2

Then, reapply the Deployment:

kubectl apply -f wiki.yaml

Since the Pod template in the Deployment manifest has changed, the reapplication of the Deployment will recreate all the Pods. And the new Pods will use the new version of the ConfigMap.
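
One way to double-check this (the config-templates volume name comes from the manifest above) is to look at the rollout history and at which ConfigMap the Deployment's Pod template now references:

# a new revision appears because the Pod template changed
kubectl rollout history deployment wiki-deployment

# show the ConfigMap referenced by the config-templates volume
kubectl get deployment wiki-deployment \
  -o jsonpath='{.spec.template.spec.volumes[?(@.name=="config-templates")].configMap.name}'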

As an additional advantage of this approach, if you keep the old ConfigMap (wiki-config-v1) around rather than deleting it, you can revert to a previous configuration at any time by just editing the Deployment manifest again.

This approach is described in Chapter 1 of Kubernetes Best Practices (O'Reilly, 2019).

weibeld
  • kubectl delete pods seems very idiomatic for k8s, and I also appreciate the suggestion about versioning configMaps. Thanks for those suggestions. I still feel like the kubectl rollout restart is closer to what I was looking for. – Eric Dobbs Dec 03 '19 at 00:52
  • Thanks for including "rollout restart deployment" in your answer. With that change I've switched my accepted answer to yours, especially for the clear explanations of the advantages of each option. Nice answer. – Eric Dobbs Jan 05 '20 at 03:57

For the specific question about restarting containers after the configuration is changed, as of kubectl v1.15 you can do this:

# apply the config changes
kubectl apply -f wiki.yaml

# restart the containers in the deployment
kubectl rollout restart deployment wiki-deployment
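
If you want to block until the restart has finished (for example in a deploy script), kubectl can wait for the rollout:

# wait for the new Pods to become ready
kubectl rollout status deployment wiki-deployment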

Eric Dobbs

You should do nothing but change your ConfigMap and wait for the changes to be applied. The answer you have posted a link to is wrong. After a ConfigMap change, the update is not applied right away; it can take some time, something like 5 minutes.

If that doesn't happen, you can report a bug about that specific version of k8s.
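
If you want to check whether the file mounted from the ConfigMap has been refreshed inside a running Pod, you could inspect it directly (the path comes from the volumeMount in the question):

# print the config file as the container currently sees it
kubectl exec deploy/wiki-deployment -- cat /etc/config/config.json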

suren
  • This will cause the files visible to the container to change, but many processes read their configuration once at startup and never reload them. You need to somehow restart the pod to get it to re-read the config files. – David Maze Nov 30 '19 at 12:27
  • Well, yes, if say you have a `ConfigMap` with nginx configuration, then you have to restart nginx to pass the new configuration, but the config itself is going to be available in the container. You can simply restart nginx. No need to recreate the pod. – suren Nov 30 '19 at 15:43
  • I like the approach of building the app inside the container so that it notices when its configuration has changed. That's good advice. But what I'm asking for here are the kubernetes commands to restart a container when it has NOT been built that way. – Eric Dobbs Dec 01 '19 at 16:54