
So I updated the manifest and replaced apiVersion: extensions/v1beta1 with apiVersion: apps/v1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secretmanager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: secretmanager
  template:
    metadata:
      labels:
        app: secretmanager
    spec:
    ...

I then applied the change:

k apply -f deployment.yaml

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/secretmanager configured

I also tried:

k replace --force -f deployment.yaml

That recreated the Pod (downtime :( ), but if I output the YAML config of the Deployment I still see the old value:

k get deployments -n kube-system secretmanager -o yaml 

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"secretmanager","namespace":"kube-system"}....}
  creationTimestamp: "2020-08-21T21:43:21Z"
  generation: 2
  name: secretmanager
  namespace: kube-system
  resourceVersion: "99352965"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/secretmanager
  uid: 3d49aeb5-08a0-47c8-aac8-78da98d4c342
spec:

So I still see apiVersion: extensions/v1beta1.

What am I doing wrong?

I am preparing an EKS Kubernetes v1.15 cluster to be migrated to v1.16.


1 Answer


The Deployment exists in multiple apiGroups, so the short name is ambiguous. Try specifying the group/version explicitly, e.g. apps/v1, with:

kubectl get deployments.v1.apps

and you should see your Deployment, but with the apps/v1 apiGroup.
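As a concrete check (a sketch, assuming the same namespace and Deployment name as in the question, and a cluster that still serves the deprecated group), you can request the same stored object at each group/version and compare the first line of the output; only the serialization differs, not the stored object:

```shell
# Fetch the Deployment at the apps/v1 group/version.
kubectl get deployments.v1.apps -n kube-system secretmanager -o yaml | head -n 1

# Fetch the same object at the deprecated extensions/v1beta1 group/version
# (only works while the cluster still serves that API).
kubectl get deployments.v1beta1.extensions -n kube-system secretmanager -o yaml | head -n 1
```

The `resource.version.group` form disambiguates which API group kubectl should use; a bare `kubectl get deployments` lets the server pick, which is why the question's output showed extensions/v1beta1.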

  • So does that mean that I should not modify the manifest and just run the cluster upgrade routine? – DmitrySemenov Aug 21 '20 at 22:11
  • @DmitrySemenov, yes, that is true. – Jonas Aug 21 '20 at 22:12
  • Thank you @Jonas. So if an existing ingress resource has apiVersion: extensions/v1beta1, do I update it prior to the cluster upgrade? And what happens if I don't update the manifest and upgrade the cluster: will that resource automatically switch away from **apiVersion: extensions/v1beta1**, or will it fail so that I have to delete/create the manifest? – DmitrySemenov Aug 21 '20 at 22:18
  • @DmitrySemenov this was for `Deployment`, which existed in multiple apiGroups. Other APIs may not exist in multiple apiGroups and may need migration. Look up the documentation for the version you are upgrading to, e.g. https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/ – Jonas Aug 21 '20 at 22:24
  • See notable changes, e.g. "spec.selector is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades" – Jonas Aug 21 '20 at 22:26
  • Yes, I read this document. What I am confused about is how do I migrate? I could not use `kubectl edit` to change the apiVersion, so I have to modify the manifest. Is it enough to modify the YAML/Helm template and then just run `kubectl apply -f manifest.yaml`? Will that be considered "done" for the migration? Or should that be done **after** the cluster upgrade? Sorry for the dumb questions. – DmitrySemenov Aug 21 '20 at 22:29
  • For `Deployment` it is mostly changed default values, so it does not affect what you have deployed. But you may add `selector`, since that is now _required_, as noted in the doc. That field exists in both apiGroups, but is mandatory in `apps/v1`. – Jonas Aug 21 '20 at 22:39
  • The `kubectl convert` command for converting to the new apiGroup is also noted in that document. – Jonas Aug 21 '20 at 22:57
  • Yes, I saw that - it goes through the files and updates their content. My question is: do I need to run kubectl apply -f for all changed manifests and Helm packages prior to the upgrade, or not? – DmitrySemenov Aug 21 '20 at 23:07