
I'm starting with a new project, the default VPC, and a fork of the fluxcd/flux2-kustomize-helm-example GitHub repository.

When I ran `flux bootstrap` against a clean, new PRIVATE GKE Autopilot cluster, nothing became available (see the output below). The pods were stuck in ImagePullBackOff, and the log traces looked like everything was in airplane mode.
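To see the pull failure in detail, I inspected one of the stuck pods (pod name taken from the listing below; the exact event text is illustrative, not a verbatim capture):

```shell
# Flux installs into the flux-system namespace by default
kubectl -n flux-system describe pod helm-controller-57ff7dd7b5-nnpm8 | grep -A 8 Events
# On a private cluster with no egress, the events typically show
# pulls to ghcr.io failing with "i/o timeout" / network-unreachable errors
```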

I suspect I need to open up Cloud NAT access to ghcr.io/fluxcd/helm-controller, github.com/fluxcd, et al., unless there is a fluxcd mirror within gcr.io.
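If Cloud NAT is indeed the fix, this is roughly what I'd run. This is a minimal sketch; the router/NAT names are placeholders, and the region and network are assumptions that must match the cluster's subnet:

```shell
# Hypothetical names; adjust --region and --network to match the cluster
gcloud compute routers create nat-router \
  --network=default --region=us-central1

gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```

With NAT in place, private nodes get outbound internet access (no inbound), which should let kubelet pull from ghcr.io.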

NAME                                           READY   STATUS             RESTARTS   AGE
pod/helm-controller-57ff7dd7b5-nnpm8           0/1     ImagePullBackOff   0          4m50s
pod/kustomize-controller-9f9bf46d9-wzcdr       0/1     ImagePullBackOff   0          4m50s
pod/notification-controller-64496c6d67-g6wpx   0/1     ImagePullBackOff   0          4m50s
pod/source-controller-7467658dcb-t6bsp         0/1     ImagePullBackOff   0          4m50s

NAME                              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/notification-controller   ClusterIP   10.42.1.103   <none>        80/TCP    4m51s
service/source-controller         ClusterIP   10.42.3.58    <none>        80/TCP    4m51s
service/webhook-receiver          ClusterIP   10.42.1.217   <none>        80/TCP    4m51s

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helm-controller           0/1     1            0           4m51s
deployment.apps/kustomize-controller      0/1     1            0           4m51s
deployment.apps/notification-controller   0/1     1            0           4m51s
deployment.apps/source-controller         0/1     1            0           4m51s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/helm-controller-57ff7dd7b5           1         1         0       4m50s
replicaset.apps/kustomize-controller-9f9bf46d9       1         1         0       4m50s
replicaset.apps/notification-controller-64496c6d67   1         1         0       4m50s
replicaset.apps/source-controller-7467658dcb         1         1         0       4m50s
  • Can you share the `describe` result for the specific pods you are having trouble with? I just want to check the information from the pods. – Yvan G. Jan 10 '23 at 20:04
  • Can you make sure that Private Google Access is enabled on the subnet where your cluster is? You don't need Cloud NAT if Private Google Access is enabled. Private Google Access is the feature that allows things inside GCP to reach Google APIs (including gcr) without having access to the internet. – boredabdel Jan 12 '23 at 10:36

0 Answers