
After fixing the problem from this topic, Can't use Google Cloud Kubernetes substitutions (the YAML files are all there, so I won't copy-paste them again), I ran into a new problem. I'm making a new topic because the previous one already has its correct answer.

Step #2: Running: kubectl apply -f deployment.yaml
Step #2: Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Step #2: The Deployment "myproject" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"myproject", "run":"myproject"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

I've checked similar issues but haven't been able to find anything related.

Also, is it possible that this error is related to the migration App Engine -> Docker -> Kubernetes? I created a valid configuration at each step. Maybe something was created along the way that is immutable now? What should I do in this case?
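
For reference, this is what I can run to see what already exists in the cluster (just a sketch; myproject is the deployment name from my config):

# list every Deployment that already exists, in all namespaces
kubectl get deployments --all-namespaces

# dump the live object, including the immutable spec.selector
kubectl get deployment myproject -o yaml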

One more note that may matter: it says "kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply" (you can see it above), but executing

kubectl create deployment myproject --image=gcr.io/myproject/myproject

gives me this

Error from server (AlreadyExists): deployments.apps "myproject" already exists

which is actually expected but, at the same time, seems to contradict the warning above (at least from my perspective).
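
As far as I understand, that warning means the live object is missing the annotation that kubectl apply keeps its record in, so this is a sketch of how I could check and fix that (the kubectl.kubernetes.io/last-applied-configuration annotation name is from the kubectl docs, myproject is my deployment name):

# print the annotation kubectl apply relies on; empty output means it is missing
kubectl get deployment myproject -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'

# backfill the annotation from the manifest so future applies stop warning
kubectl apply set-last-applied -f deployment.yaml --create-annotation=true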

Any idea?

Output of kubectl version

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}



Current Cloud Build YAML file:

steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [
      '-c',
      'docker pull gcr.io/$PROJECT_ID/myproject:latest || exit 0'
    ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'build',
      '-t',
      'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA',
      '-t',
      'gcr.io/$PROJECT_ID/myproject:latest',
      '.'
    ]
  - name: 'gcr.io/cloud-builders/kubectl'
    args: [ 'apply', '-f', 'deployment.yaml' ]
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=<region>'
      - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
  - name: 'gcr.io/cloud-builders/kubectl'
    args: [
      'set',
      'image',
      'deployment',
      'myproject',
      'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
    ]
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=<region>'
      - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
      - 'DB_PORT=5432'
      - 'DB_SCHEMA=public'
      - 'TYPEORM_CONNECTION=postgres'
      - 'FE=myproject'
      - 'V=1'
      - 'CLEAR_DB=true'
      - 'BUCKET_NAME=myproject'
      - 'BUCKET_TYPE=google'
      - 'KMS_KEY_NAME=storagekey'
timeout: 1600s
images:
  - 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
  - 'gcr.io/$PROJECT_ID/myproject:latest'

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myproject
  template:
    metadata:
      labels:
        app: myproject
    spec:
      containers:
        - name: myproject
          image: gcr.io/myproject/github.com/weekendman/{{repo name here}}:latest
          ports:
            - containerPort: 80
WeekendMan
  • Are you trying to re-purpose an App Engine config for Kubernetes? They are totally different platforms. Where are you setting the "app" field? Are you trying to create a new deployment or update an existing one? There's too much going on here... – Travis Webb Nov 18 '19 at 06:44
  • @TravisWebb, I'm new to all these DevOps things. I started with simpler stuff, like App Engine, then looked into Docker container deployment, then started reading about Kubernetes. Yes, they are totally different, I understand. But I don't think it would be easy to understand the Kubernetes docs without the GCloud + Docker deployment background. The "app" field is in the deployment YAML file. If you could explain what is redundant in the configuration, I would be happy to try your suggestions. – WeekendMan Nov 18 '19 at 06:53
  • @WeekendMan I've tried to reproduce it, however I didn't run into any issues. Please provide some information. 1. Are you using GCP (Compute Engine > VM instances) or GKE? I'm asking because GKE is a bit behind the newest versions of Kubernetes. Provide the output of `$ kubectl version`, because the Deployment apps/v1 API requires Kubernetes 1.9 or newer, and if you were using an earlier version the issue is probably somewhere else. 2. Is it possible to post your original YAML and the new one? – PjoterS Nov 18 '19 at 11:00
  • @TravisWebb, yes, it's GKE, not Compute Engine. But I had used Compute Engine before GKE to work with the Docker container, then started changing the config to use GKE. I've updated the topic with the `kubectl version` output. You can see the previous YAML file via the link in the first paragraph (it was my question too). I changed it exactly as in the first (and only) answer and the comment under it. I just want to keep this question as short as possible and not duplicate details, not because I'm lazy =) Thank you for helping! – WeekendMan Nov 18 '19 at 11:18
  • @WeekendMan It's hard to tell without seeing the YAML before and after. It might be related to specific apiVersions, adding/removing some fields, typos, etc. – PjoterS Nov 19 '19 at 11:48
  • @PjoterS, I've added the actual YAML file here. But, as I said, it's the same as in the linked answer aside from the things mentioned in the comments. I don't think it's related to versioning because I made all the changes within one month or even less. But of course I may be wrong. – WeekendMan Nov 19 '19 at 12:02
  • You need to clarify what you are doing. The last code chunk seems to be part of Cloud Build, and so is the first part where `kubectl apply -f` fails (it looks like the second build step). The information is mixed up, and that makes it very hard to provide an accurate answer. I suggest you explain your scenario and the tool that you're using. Without looking at the actual deployment definition requested by PjoterS, it is hard to tell why the label selector is complaining. – yyyyahir Nov 20 '19 at 14:44
  • @yyyyahir, I'm trying to deploy the application. It should work via a manual terminal "deploy" command and on GCloud by a push to a specific branch. Which information is mixed up exactly? Tell me, maybe that would give me a good hint. This is the actual deployment definition in the question body. Yes, it fails on the "apply" command. Please ask me specific questions and I'll try to answer whatever you need. – WeekendMan Nov 21 '19 at 04:46
  • The error message seems to imply that there is something wrong with the labels in the deployment being applied with `kubectl apply -f`, yet that file is not shared. Instead there is a file with Cloud Build [build steps](https://cloud.google.com/cloud-build/docs/configuring-builds/create-basic-configuration#creating_a_build_config) formatting, which is not valid K8s syntax. This leads me to believe that's what you're using. – yyyyahir Nov 21 '19 at 11:02
  • Furthermore, you seem to be deploying in two different ways. The first looks related to a Cloud Build step (it even says `Step #2`, related to the kubectl builder), and below you seem to be using `kubectl create`. I'm not sure where `apply` is used other than in the build step, which is valid. However, it might shed some light if we could see what is actually being deployed, the order in which things are deployed, and the commands used for that. – yyyyahir Nov 21 '19 at 11:06
  • @yyyyahir, errr, yes, I hadn't added the deployment.yaml file. Sorry, I don't know how I could have skipped it. I've added it to the question, thanks. – WeekendMan Nov 22 '19 at 05:55
  • Just deployed your YAML and used `create`. The first chunk, which is related to Cloud Build, still puzzles me. Anyway, I can't make it error out like in your case. The [answer](https://stackoverflow.com/a/58909680/10892354) given sounds convincing, but without detailed steps to replicate it might be something in your specific environment. – yyyyahir Dec 02 '19 at 17:43

2 Answers

18

From apps/v1 on, a Deployment’s label selector is immutable after it gets created.

Excerpt from the Kubernetes documentation:

Note: In API version apps/v1, a Deployment’s label selector is immutable after it gets created.

So, you can delete this deployment first, then apply it.
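
In commands, that would be something like the following (a sketch using the names from the question; deleting the Deployment also deletes its Pods, so expect a short outage):

# remove the existing Deployment together with its immutable selector
kubectl delete deployment myproject

# re-create it from the manifest; apply also records the config for later applies
kubectl apply -f deployment.yaml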

Kun Li
  • I've just checked the deployment list in the Google Cloud Panel, it's empty, is that how it's supposed to be? – WeekendMan Nov 18 '19 at 08:21
  • This isn't a viable option in production; we don't want any downtime, which is why we chose a rolling deployment. How does one get around this issue? – Jean-Paul Dec 17 '19 at 10:42
  • You can still do a rolling deployment, as long as you don't change the label selector of a deployment. The label selector connects a deployment to the pods it generates, so if you change it, how would it know which pods belong to it? So, if you are certain that you want to change the label selector, you can start a deployment with a new name, then point your service to this new deployment; that's the lowest-impact way I can figure out (see the sketch after these comments). – Kun Li Dec 18 '19 at 04:21
  • This post has [a good explanation for Kubernetes' concept of immutability and its reasoning](https://stackoverflow.com/questions/62280327/why-are-some-kubernetes-resources-immutable-after-creation) ℹ️. – hc_dev Oct 27 '22 at 06:44
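
A minimal sketch of the rename-and-repoint approach from the comment above (the myproject-v2 name and the Service are hypothetical, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject-v2              # hypothetical new name; the old deployment stays untouched
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myproject-v2           # the new (changed) selector
  template:
    metadata:
      labels:
        app: myproject-v2         # pod labels must match the selector above
    spec:
      containers:
        - name: myproject
          image: gcr.io/myproject/myproject:latest
---
apiVersion: v1
kind: Service
metadata:
  name: myproject                 # hypothetical Service repointed at the new pods
spec:
  selector:
    app: myproject-v2
  ports:
    - port: 80
      targetPort: 80
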
9

The spec.selector field shown in the error (MatchLabels plus MatchExpressions:[]v1.LabelSelectorRequirement(nil)) is immutable, and the error appears because it is different from the selector of your previous deployment.

Try looking at the existing deployment with kubectl get deployment -o yaml. I suspect the existing YAML has a different matchLabels stanza.

Specifically your file has:

    matchLabels:
      app: myproject

my guess is the output of kubectl get deployment -o yaml will have something different, like:

    matchLabels:
      app: old-project-name

or

    matchLabels:
      app: myproject
      version: alpha

The new deployment cannot change the matchLabels stanza because, well, because it is immutable. That stanza in the new deployment must match the old. If you want to change it, you need to delete the old deployment with kubectl delete deployment myproject.

Note: if you do that in production your app will be unavailable for a while. (A much longer discussion about how to do this in production is not useful here.)
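
A quick way to compare the two selectors before deleting anything (a sketch; myproject and deployment.yaml are the names from the question):

# the selector the API server already has (this is the immutable part)
kubectl get deployment myproject -o jsonpath='{.spec.selector}'

# the matchLabels stanza in the file being applied
grep -A 3 'matchLabels' deployment.yaml

If the two differ and you want to keep the same deployment name, the delete-and-reapply shown in the other answer is the way out.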

Mark P. Hahn
  • Excellent explanation for the possible cause of this invalid deployment, including investigating `kubectl` commands. – hc_dev Oct 27 '22 at 06:32