
So I have this project that I already deployed on GKE, and I am trying to set up CI/CD with GitHub Actions. I added a workflow file which contains:

name: Build and Deploy to GKE

on:
  push:
    branches:
      - main

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }}    # Add your cluster name here.
  GKE_ZONE: ${{ secrets.GKE_ZONE }}   # Add your cluster zone here.
  DEPLOYMENT_NAME: ems-app # Add your deployment name here.
  IMAGE: ciputra-ems-backend

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
    - name: Checkout
      uses: actions/checkout@v2

    # Setup gcloud CLI
    - uses: google-github-actions/setup-gcloud@94337306dda8180d967a56932ceb4ddcf01edae7
      with:
        service_account_key: ${{ secrets.GKE_SA_KEY }}
        project_id: ${{ secrets.GKE_PROJECT }}

    # Configure Docker to use the gcloud command-line tool as a credential
    # helper for authentication
    - run: |-
        gcloud --quiet auth configure-docker

    # Get the GKE credentials so we can deploy to the cluster
    - uses: google-github-actions/get-gke-credentials@fb08709ba27618c31c09e014e1d8364b02e5042e
      with:
        cluster_name: ${{ env.GKE_CLUSTER }}
        location: ${{ env.GKE_ZONE }}
        credentials: ${{ secrets.GKE_SA_KEY }}

    # Build the Docker image
    - name: Build
      run: |-
        docker build \
          --tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
          --build-arg GITHUB_SHA="$GITHUB_SHA" \
          --build-arg GITHUB_REF="$GITHUB_REF" \
          .

    # Push the Docker image to Google Container Registry
    - name: Publish
      run: |-
        docker push "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA"

    # Set up kustomize
    - name: Set up Kustomize
      run: |-
        curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
        chmod u+x ./kustomize

    # Deploy the Docker image to the GKE cluster
    - name: Deploy
      run: |-
        ./kustomize edit set image LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG=$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:$GITHUB_SHA
        ./kustomize build . | kubectl apply -k ./
        kubectl rollout status deployment/$DEPLOYMENT_NAME
        kubectl get services -o wide

but when the workflow gets to the deploy step, it shows an error:

The Service "ems-app-service" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update

Now I have read that this is actually not true, because the resourceVersion is supposed to change with every update, so I just removed it.

Here is my kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - service.yaml
  - deployment.yaml

my deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: ems-app
  name: ems-app
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ems-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ems-app
    spec:
      containers:
      - image: gcr.io/ciputra-nusantara/ems@sha256:70c34c5122039cb7fa877fa440fc4f98b4f037e06c2e0b4be549c4c992bcc86c
        imagePullPolicy: IfNotPresent
        name: ems-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

and my service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: ems-app
  name: ems-app-service
  namespace: default
spec:
  clusterIP: 10.88.10.114
  clusterIPs:
  - 10.88.10.114
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 30261
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ems-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 34.143.255.159

  • Which cluster version are you using? – Chandra Kiran Pasumarti Mar 23 '22 at 05:33
  • i deploy it originally from a dockerfile, i don't create cluster manually with GKE Standard or autopilot if that's what you mean – Malik Mar 23 '22 at 06:26
  • 1
    Couple of options: 1) remove `clusterIP` from your service spec 2) run `kubectl annotate svcems-app-service kubectl.kubernetes.io/last-applied-configuration-` prior to applying your service update – Gari Singh Mar 23 '22 at 09:49
  • 5
    This is because there's a **resourceVersion** field in **last-applied-configuration annotation**, which is not expected. Remove the kubectl.kubernetes.io/last-applied-configuration annotation by running the command below and update the service again. "**kubectl annotate svc my-service kubectl.kubernetes.io/last-applied-configuration-"** The - on the end of the annotation tells Kubernetes to remove the annotation entirely. – Chandra Kiran Pasumarti Mar 23 '22 at 09:54
  • i tried both of your suggestions and it works with a little warning, but i searched that it can be ignored, but then i get another problem, which is the changes that i made to files that was build is not there, and i check at the revision details that my revision is not deployed. but then i found that that i set the kustomize set image to repository instead of container image so i change it to this ```./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/${{ env.PROJECT_ID }}/${{ env.IMAGE }}:${{ github.sha }}``` but the new revision still hasn't been deployed – Malik Mar 23 '22 at 13:19
  • Are you executing the command in the correct folder? – Chandra Kiran Pasumarti Mar 24 '22 at 17:31
  • Thanks @ChandraKiranPasumarti, I had the same issue "metadata.resourceVersion: Invalid value: "": must be specified for an update". I used "kubectl annotate svc kubectl.kubernetes.io/last-applied-configuration-" and applied my svc.yaml file again and it worked, although it gave the warning "Warning: resource services is missing the kubectl.kubernetes.io/last-applied-configuration annotation", which I ignored. – Ripunjay Godhani Jun 07 '22 at 17:31
  • @Malik Has your issue been resolved? If yes, can you post the procedure you've followed as Solution and accept it. – Fariya Rahmat Jun 21 '22 at 10:38
  • The solution posted by @ChandraKiranPasumarti works, please add it as an answer – Ahmed Jun 23 '22 at 10:25

2 Answers


As the title of this question is more Kubernetes-related than GCP-related, I will answer, since I had this same problem using AWS EKS.

`metadata.resourceVersion: Invalid value: 0x0: must be specified for an update` is an error that may appear when using `kubectl apply`.

`kubectl apply` makes a three-way merge between your local file, the live Kubernetes object manifest, and the `kubectl.kubernetes.io/last-applied-configuration` annotation on that live object.

So, for some reason, a `resourceVersion` value managed to get written into your last-applied-configuration, probably because someone exported the live manifest to a file, modified it, and applied it back again.
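You can confirm this is what happened by inspecting the annotation on the live object. A quick check, using the service name from the question (adjust the name and namespace for your own resource):

```shell
# Print the last-applied-configuration annotation of the live Service and
# look for a "resourceVersion" field inside it. If one shows up, that
# stale value is what breaks the three-way merge.
kubectl get service ems-app-service -n default \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' \
  | grep -o '"resourceVersion":"[^"]*"'
```

If the grep prints nothing, the annotation is clean and your problem is elsewhere.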

When you try to apply your new local file, which doesn't have that value (and should not have it), but the value is present in the last-applied-configuration, kubectl concludes the field should be removed from the live manifest and explicitly sends `resourceVersion: null` in the subsequent patch operation. That patch is rejected, because the API server requires `resourceVersion` to be specified for an update, and the apply fails as invalid.

As feichashao mentions, the way to solve it is to delete the last-applied-configuration annotation and apply your local file again.
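Concretely, for the service in this question, that would look like the following (service name and namespace taken from the question's manifests; the trailing `-` is what tells kubectl to delete the annotation):

```shell
# Remove the stale last-applied-configuration annotation from the live Service.
kubectl annotate service ems-app-service -n default \
  kubectl.kubernetes.io/last-applied-configuration-

# Re-apply the local manifests; kubectl rebuilds the annotation from scratch.
kubectl apply -k ./
```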

Once you have, your `kubectl apply` output will look like:

Warning: resource <your_resource> is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.

And your live manifests will be updated.


In case anyone is still having this problem: I may not be able to help if you still want to use GKE, but you can try the answer from @ChandraKiranPasumarti. In my case, my senior only required me to containerize our app, so I used Google Cloud Run instead for easier deployments and CI/CD. You can use this file to set up CI/CD for Cloud Run:

https://github.com/google-github-actions/setup-gcloud/blob/main/example-workflows/cloud-run/cloud-run.yml

Just make sure you've added a secret with the service account JSON to your repo, then reference that credentials JSON for authentication in your yml file.
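As a rough sketch, the deploy portion of such a workflow boils down to two gcloud commands. The service name, region, and image path below are placeholders for illustration, not values from the linked example:

```shell
# Build the container image with Cloud Build and push it to the registry.
gcloud builds submit --tag "gcr.io/$PROJECT_ID/ems-app:$GITHUB_SHA"

# Deploy that image as a fully managed Cloud Run service.
gcloud run deploy ems-app \
  --image "gcr.io/$PROJECT_ID/ems-app:$GITHUB_SHA" \
  --region asia-southeast1 \
  --platform managed \
  --allow-unauthenticated
```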
