For some months we have been building an internal network across all our Google Cloud projects using Shared VPC, and we have had no problems setting up Compute Engine instances in the subnetworks assigned to each project.

The problem appeared when we tried to create a Kubernetes cluster. Whenever we do so, we get the following error:

Error 403: Google Compute Engine: Required 'compute.networks.get' permission for 'projects/host-project/global/networks/subnet-required'

I checked that all the required permissions were in place, as described in Google's GKE Shared VPC documentation, and even re-enabled the APIs and set up the suggested permissions from scratch. Still, the problem persists.
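
For anyone double-checking the same thing, one way to inspect the subnet-level permissions is shown below (a sketch, assuming the subnet-level bindings from the docs and the placeholder names used in the command further down; on older gcloud releases this command may live under gcloud beta):

# Show who holds roles (e.g. roles/compute.networkUser) on the shared subnet:
gcloud compute networks subnets get-iam-policy <subnet-required> \
    --region us-east1 \
    --project <host-project>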

The command used to create the cluster is:

gcloud container clusters create test \
    --project <target-project> \
    --zone=us-east1-c \
    --enable-ip-alias \
    --network projects/<host-project>/global/networks/<vpc-network> \
    --subnetwork projects/<host-project>/regions/us-east1/subnetworks/<subnet-required> \
    --cluster-secondary-range-name k8s-cluster-range \
    --services-secondary-range-name k8s-services-range

The Container Engine and Compute Engine API service accounts have been granted roles/compute.networkUser and roles/container.hostServiceAgentUser, as described in the documentation.
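
For reference, those grants look roughly like this (a sketch: <service-project-number> is a hypothetical placeholder for the service project's number, and the GKE service agent follows the service-<project-number>@container-engine-robot.iam.gserviceaccount.com naming pattern; some of these commands sit under gcloud beta in older releases):

# Allow the service project's GKE service agent to manage Shared VPC resources in the host project:
gcloud projects add-iam-policy-binding <host-project> \
    --member "serviceAccount:service-<service-project-number>@container-engine-robot.iam.gserviceaccount.com" \
    --role roles/container.hostServiceAgentUser

# Allow the same service agent to use the shared subnet
# (a similar networkUser grant is made for the other API service accounts mentioned above):
gcloud compute networks subnets add-iam-policy-binding <subnet-required> \
    --region us-east1 \
    --project <host-project> \
    --member "serviceAccount:service-<service-project-number>@container-engine-robot.iam.gserviceaccount.com" \
    --role roles/compute.networkUser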

Has anyone run into this problem, or does anyone know what could be causing this error?

Thanks!

1 Answer

It took some time, as I had been trying to solve this issue for several days, but I finally found what was needed to make this work.

I found that gcloud beta container subnets list-usable --project <client-project> --network-project <host-project> listed the subnet as usable but did not show the secondary ranges defined within it, and it also returned another 403 error. With that information in hand, I checked the APIs again to see whether there was any problem with them, even though I had already disabled and re-enabled them.

It turned out that gcloud services enable container.googleapis.com --project <host-project>, which should create the Kubernetes Engine API service account, did not behave as expected and never created the service account.
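
One way to check whether that service account actually exists is to look for the container-engine-robot member in the host project's IAM policy (service agents don't show up in gcloud iam service-accounts list, so this is a reasonable proxy):

# List IAM bindings on the host project that mention the GKE service agent:
gcloud projects get-iam-policy <host-project> \
    --flatten="bindings[].members" \
    --format="table(bindings.role, bindings.members)" \
    --filter="bindings.members:container-engine-robot"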

Interestingly, disabling the Kubernetes Engine API from the command line and enabling it again did not create the service account either. The only way I managed to work around the issue was to enable the API through the Cloud Console; after doing that, the service account was created successfully.
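
For anyone hitting this later: newer gcloud releases also expose a command to explicitly ask Google to create a service's agent account, which might be worth trying before resorting to the console (this may not have existed at the time of writing; availability depends on your gcloud version):

# Request (re)creation of the Kubernetes Engine service agent in the host project:
gcloud beta services identity create \
    --service=container.googleapis.com \
    --project=<host-project>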

Once the service account existed in the host project, the client projects' service accounts were able to reach the host project without any issues.

Hopefully anyone who runs into this problem again will find the solution here!

  • I suggest you raise this issue on the [issue tracker](https://issuetracker.google.com); Google will troubleshoot and fix the behavior. – Alioua Feb 21 '19 at 01:21
  • That's a good idea @Alioua, though I'm waiting for my colleague to come back from holidays to check whether we have the same issue with a different deployment before reporting it, as it might have been caused by some other operations in the host project that I'm not aware of. – Daniel Sanchez Feb 21 '19 at 11:50