
When I run any kubectl command I get the following WARNING:

W0517 14:33:54.147340   46871 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke

I have followed the instructions in the link several times, but the WARNING keeps appearing, which makes the kubectl output uncomfortable to read.

OS:

cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04 LTS"

kubectl version:

Client Version: v1.24.0
Kustomize Version: v4.5.4

gke-gcloud-auth-plugin:

Kubernetes v1.23.0-alpha+66064c62c6c23110c7a93faca5fba668018df732

gcloud version:

Google Cloud SDK 385.0.0
alpha 2022.05.06
beta 2022.05.06
bq 2.0.74
bundled-python3-unix 3.9.12
core 2022.05.06
gsutil 5.10

I "login" with:

gcloud init

and then:

gcloud container clusters get-credentials cluster_name --region my-region

finally:

myuser@mymachine:/$ k get pods -n madeupns
W0517 14:50:10.570103   50345 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
No resources found in madeupns namespace.

How can I remove the WARNING or fix the problem?

Removing my .kube/config and re-running get-credentials didn't work.

Alexander Meise
  • Did you set/export `USE_GKE_GCLOUD_AUTH_PLUGIN=True` before running `gcloud container clusters get-credentials` again? You should be able to detect the change in the `users` section of `${HOME}/.kube/config`. I've not tried confirming that my own config is updated but will look tomorrow when I create a cluster. It **may** be that the `kubectl` warning is static and doesn't itself check that you've updated the plugin. – DazWilkin May 18 '22 at 00:22
  • You are right @DazWilkin, there was a typo in my .bashrc and fixing it removed the warning. – Alexander Meise May 18 '22 at 15:34
  • I'm pleased to hear that you resolved it. I am going to try it for myself this morning. – DazWilkin May 18 '22 at 15:37
  • @AlexanderMeise Good job on finding the solution to your own question. Could you please post your answer as a formal answer to help other users that have a similar problem? – Rogelio Monter May 18 '22 at 17:20
  • I just want to add that I'm on Windows, and encountered the same issue. The issue was resolved by 1. adding `USE_GKE_GCLOUD_AUTH_PLUGIN=True` to env variables, 2. restarting Windows Terminal, 3. running `gcloud container clusters get-credentials CLUSTER_NAME`, as described by @DazWilkin. The environment variables update was not registered the first time I ran `gcloud container...` because I had not restarted the terminal, which was the root cause of my confusion. – Adrian Wiik Nov 29 '22 at 11:51

4 Answers


I fixed this problem by adding the correct export to .bashrc:

export USE_GKE_GCLOUD_AUTH_PLUGIN=True

After sourcing .bashrc with . ~/.bashrc and reloading the cluster config with:

gcloud container clusters get-credentials clustername

the warning disappeared:

user@laptop:/$ k get svc -A
NAMESPACE     NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    
kube-system   default-http-backend   NodePort       10.10.13.157   <none>         
kube-system   kube-dns               ClusterIP      10.10.0.10     <none>         
kube-system   kube-dns-upstream      ClusterIP      10.10.13.92    <none>         
kube-system   metrics-server         ClusterIP      10.10.2.191    <none>         
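
To confirm that the kubeconfig entry was actually rewritten to use the plugin (rather than the legacy gcp auth provider), a quick check such as the following should print gke-gcloud-auth-plugin for the current context; this assumes the default ~/.kube/config location:

kubectl config view --minify -o jsonpath='{.users[0].user.exec.command}'
grep -n 'auth-provider\|gke-gcloud-auth-plugin' ~/.kube/config
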
Alexander Meise
  • It did the trick for my Dockerfile version on our GCP manager. It's important to use "export USE_GKE_GCLOUD_AUTH_PLUGIN=True" instead of the bare "USE_GKE_GCLOUD_AUTH_PLUGIN=True" that the GCP article suggests. – Gonzalo Cao May 31 '22 at 17:34
  • Thanks, your comment deserves 10 upvotes. :) – OpenBSDNinja Aug 04 '22 at 12:02
  • thx! "gcloud container clusters get-credentials clustername" was the only thing I needed to get it working – FrankyHollywood Nov 22 '22 at 13:25

I got a similar issue while connecting to a fresh Kubernetes cluster running version v1.22.10-gke.600:

gcloud container clusters get-credentials my-cluster --zone europe-west6-b --project project

and got the error below; it seems that for newer versions this has become an error rather than just a warning:

Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable. Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke


The fix that worked for me:

gcloud components install gke-gcloud-auth-plugin
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials my-cluster --zone europe-west6-b --project project
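
Note that if the Cloud SDK was installed through apt (common on Ubuntu, which the question uses), gcloud components install is disabled and the plugin has to come from the package manager instead. The package name below is the one documented for the Google Cloud apt repository; it may differ (e.g. google-cloud-cli-gke-gcloud-auth-plugin) depending on the repo you have configured:

sudo apt-get update && sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin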

Adiii

You need to do the following things to avoid this warning message now and to avoid errors in the future.

  1. Add the correct export to .bashrc. (I am using .zshrc instead of .bashrc, so I added the export to .zshrc.)

    export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    
  2. Reload .bashrc

    source ~/.bashrc
    
  3. Update gcloud to the latest version.

    gcloud components update
    
  4. Run the following command, replacing CLUSTER_NAME with the name of your cluster. This will force the kubeconfig for this cluster to be updated to the Client-go Credential Plugin configuration.

    gcloud container clusters get-credentials CLUSTER_NAME
    
  5. Check the kubeconfig file by entering the following command. You should now see the change (gke-gcloud-auth-plugin) in the users section of the kubeconfig file in your home directory; an example excerpt is shown after this list.

    cat ~/.kube/config
    

The reason behind this is:

Starting with v1.26, kubectl no longer has a built-in authentication mechanism for GKE, so GKE users need to download and use a separate authentication plugin that generates GKE-specific tokens. To get more details please read here.
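
Mechanically, kubectl now shells out to the plugin, which prints a short-lived credential in the client-go ExecCredential format, and kubectl uses the returned token against the GKE API server. You can see the idea by running the plugin by hand; the output below is an abbreviated sketch with the token and timestamp elided, not verbatim plugin output:

gke-gcloud-auth-plugin
{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "status": {
        "expirationTimestamp": "...",
        "token": "..."
    }
}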

Jitendra Rathor

After upgrading to the GKE gcloud auth plugin, all my kubectl commands started to time out.

It turns out I had forgotten to add the --internal-ip flag to the get-credentials command, which was needed in my case.

gcloud container clusters get-credentials CLUSTER_NAME --internal-ip
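
For context, --internal-ip makes gcloud write the cluster's private endpoint into the kubeconfig, so kubectl only works from machines that can reach that address (inside the VPC, or over VPN/peering). If you are unsure which endpoint your current context points at, this is a reasonable sanity check:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'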