
We are developing a Kubernetes (k8s) CSI driver. Currently, in order to upgrade the driver, we delete the installed operator pods, CRDs, and roles, and recreate them from the new version's images. What is the suggested way to upgrade? Or is uninstall/reinstall the suggested method? I couldn't find any relevant information.

We also support installation on OpenShift. Is there any difference regarding upgrades on OpenShift?

2 Answers


You should start from this documentation:

This page describes to CSI driver developers how to deploy their driver onto a Kubernetes cluster.

Especially:

Deploying a CSI driver onto Kubernetes is highlighted in detail in Recommended Mechanism for Deploying CSI Drivers on Kubernetes.

You will also find all the necessary info there, along with an example.

Your question lacks some details about your use case, but I strongly recommend starting from the guide linked above.

Please, let me know if that helps.

Wytrzymały Wiktor

CSI drivers can differ, but I believe the best approach is to do a rolling update of your plugin's DaemonSet. It will happen automatically once you apply a new DaemonSet configuration, e.g. one referencing a newer Docker image. For more details, see https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/

For example:

kubectl get -n YOUR-NAMESPACE daemonset YOUR-DAEMONSET --export -o yaml > plugin.yaml
vi plugin.yaml # Update your image tag(s)
kubectl apply -n YOUR-NAMESPACE -f plugin.yaml

(Note: the --export flag was deprecated and removed in kubectl 1.18; on current versions, simply omit it.)

A shorter way to update just the image:

kubectl set image ds/YOUR-DAEMONSET-NAME YOUR-CONTAINER-NAME=YOUR-IMAGE-URL:YOUR-TAG -n YOUR-NAMESPACE
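After either of the commands above, you can watch the rollout finish with `kubectl rollout status`, and revert a bad upgrade with `kubectl rollout undo` (both work on DaemonSets). A sketch that just assembles and prints the commands, using the same placeholder names, so you can review them before running:

```shell
# Placeholder names -- substitute your own.
ns="YOUR-NAMESPACE"
ds="YOUR-DAEMONSET-NAME"

# Watch the rolling update until every node's pod runs the new image;
# --timeout makes the command fail instead of waiting forever.
watch_cmd="kubectl rollout status ds/${ds} -n ${ns} --timeout=5m"

# If the new version misbehaves, roll the DaemonSet back:
undo_cmd="kubectl rollout undo ds/${ds} -n ${ns}"

echo "$watch_cmd"
echo "$undo_cmd"
```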

Note: I found that I also needed to restart (kill) the pod with the external provisioner. There's probably a more elegant way to handle this, but it works in a pinch.

kubectl delete pod -n YOUR-NAMESPACE YOUR-EXTERNAL-PROVISIONER-POD
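One more elegant option, assuming your external provisioner runs as a Deployment or StatefulSet rather than a bare pod, is `kubectl rollout restart` (available since kubectl 1.15), which recreates the pods gracefully instead of deleting them by hand. A sketch that assembles and prints the commands; the Deployment name is a placeholder:

```shell
# Placeholder names -- substitute your own.
ns="YOUR-NAMESPACE"
deploy="YOUR-EXTERNAL-PROVISIONER-DEPLOYMENT"

# Gracefully restart the provisioner pods, then wait for them to come back:
restart_cmd="kubectl rollout restart deployment/${deploy} -n ${ns}"
status_cmd="kubectl rollout status deployment/${deploy} -n ${ns}"

echo "$restart_cmd"
echo "$status_cmd"
```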
Jean Spector