I can (for instance) connect to the cluster compute nodes like this:
gcloud compute ssh gke-test-deploy-default-pool-xxxxx --internal-ip
But if I try to set up my kubectl credentials like this:
gcloud container clusters get-credentials test-deploy --internal-ip
it complains:
ERROR: (gcloud.container.clusters.get-credentials) cluster test-deploy is not a private cluster.
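For reference, the `--internal-ip` flag on `get-credentials` only works against private clusters. A quick sketch of how to check whether the cluster was actually created as private (assuming the `test-deploy` name from above; add `--zone` or `--region` as appropriate):

```shell
# Print whether private nodes are enabled on the cluster.
# Empty output or "False" means it is not a private cluster.
gcloud container clusters describe test-deploy \
  --format="value(privateClusterConfig.enablePrivateNodes)"

# For a non-private cluster, fetch credentials without --internal-ip:
gcloud container clusters get-credentials test-deploy
```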
Non-SSH commands work fine, e.g. kubectl get pods --all-namespaces, but if I run kubectl exec -it rabbitmq-podnumber -n backbone-testdeploy bash I get this error:
Error from server: error dialing backend: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-xxxxxxx"
BTW, the whole point of this is to use Google Cloud NAT on my cluster so that all pods share a consistent external IP when connecting to an external service (Atlas) that uses an IP whitelist. I can see the NAT working for the compute instances, but I cannot connect to the pods to verify it there.
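As a workaround while exec is broken, one way to confirm which egress IP pod traffic actually uses is to run a throwaway pod that asks an external echo service for its source address (a sketch; assumes outbound HTTPS from pods is allowed, and ifconfig.me is just one such echo service):

```shell
# Launch a temporary pod, print the IP the outside world sees, then delete it.
# If Cloud NAT applies to pod egress, this should print the NAT IP.
kubectl run nat-check --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s https://ifconfig.me
```

If the printed address matches the Cloud NAT IP, the whitelist entry for Atlas should cover pod traffic as well.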