
I updated a file (for debug output) in a running pod, but the change isn't being picked up. I was going to restart the pod to make it take effect, but I only see `oc stop`, not `oc start` or `oc restart`. How can I force a refresh of the files in the pod?

I am thinking maybe it is a Ruby thing (like opcache in PHP), but I figured a restart of the pod would handle it. I just can't figure out how to restart a pod.

Elijah Lynn
  • Restarting a pod will result in the loss of any local changes made from within the container, unless those changes were to files in a persistent volume. Can you provide more details on what image or S2I builder you are using and what file you are changing? – Graham Dumpleton Mar 31 '18 at 21:33

6 Answers

27

Make your changes in the deployment config, not in the pod itself, because OpenShift treats pods as largely immutable: changes cannot be made to a pod definition while it is running. https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/pods_and_services.html#pods

If you make a change in the deployment config and save it, the pod will restart and your changes will take effect:

oc edit dc "deploy-config-example"
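Alternatively, a one-off change to the pod template, such as setting an environment variable, also counts as a config change and triggers a redeployment. A minimal sketch, using the same example name and a hypothetical DEBUG variable:

oc set env dc/deploy-config-example DEBUG=true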

If you change something in volumes or config maps, you need to delete the pod so it gets recreated:

oc delete pod "name-of-your-pod"

The pod will then be recreated. Better still, trigger a new deployment by running:

oc rollout latest "deploy-config-example"

Using `oc rollout` is better because it re-deploys all pods of a scaled application, so you don't need to identify and delete each pod yourself.
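You can follow the new deployment until it completes; a sketch using the same example name:

oc rollout status dc/deploy-config-example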

Ripper Tops
  • One can delete all pods of an application with `oc delete pods -l app=...`. The `s` in `pods` is optional; `oc delete pod -l app=...` is equivalent. This relies on the pod template having the `app` label set, of course. – Armen Michaeli Feb 24 '23 at 14:20
  • Please fix this OS. I can't believe we can't do a simple restart. – Michael Jul 21 '23 at 15:12
14

You can scale deployments down (to zero) and then up again:

oc get deployments -n <your project> -o wide

oc get pods -n <your project> -o wide

oc scale --replicas=0 deployment/<your deployment> -n <your project>

oc scale --replicas=1 deployment/<your deployment> -n <your project>

watch oc get pods -n <your project> # wait until your deployment is up again
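As an alternative to watch, oc wait can block until the deployment reports ready again. A sketch, assuming these are Kubernetes Deployments (as the commands above suggest) rather than DeploymentConfigs:

oc wait --for=condition=Available deployment/<your deployment> -n <your project> --timeout=120s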
Noam Manos
10

If you want to do it using the GUI:

  1. Log in to OCP.
  2. Click Workloads -> Deployment Configs.
  3. Find the deployment config for the pod you want to restart.
  4. On the right side, click the three dots.
  5. Click Start Rollout.

If you delete your pod, or scale it to 0 and back to 1, you might lose some clients, because you are effectively stopping and restarting your application. With a rollout, your existing pod waits for the new pod to become ready and only then deletes itself, so a rollout is safer than deleting or scaling 0/1.
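Note that a rollout can only wait like this if the container defines a readiness probe; without one, the new pod is considered ready as soon as it starts. A minimal sketch, assuming a hypothetical HTTP health endpoint at /healthz on port 8080 and the deployment config name from the first answer:

oc set probe dc/deploy-config-example --readiness --get-url=http://:8080/healthz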

Ugur Konak
6

Thanks, Noam Manos, for your solution.

I used the "Application Console" in OpenShift. I navigated to Applications -> Deployments -> #3 (check for your active deployment) to see my pod with up and down arrows. I currently have 1 pod running, so I clicked the down arrow to scale down to 0 pods, then clicked the up arrow to scale back up to 1 pod.

BNJ
3

Follow the steps below:

  1. Log in to OpenShift.
  2. Click the Monitor tab.
  3. Select the component whose pod you want to restart.
  4. Click the Actions drop-down (top right corner).
  5. Delete the existing pod.
  6. A new pod is generated automatically.
0

You can also go to the DeploymentConfig and choose "Start Rollout" from the Actions menu.

And if nothing else helps, there is also such a thing as

Workloads -> ReplicationControllers

These control the replica counts. If you delete such a controller, another one is created in its place, and that new controller creates your new pod.
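The CLI equivalent is roughly this sketch, using the same placeholder style as the scaling answer above; take <rc-name> from the oc get output:

oc get rc -n <your project>
oc delete rc <rc-name> -n <your project>

Per this answer, a replacement controller is then created, which brings up a new pod.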

Dmitry Bakhtiarov