
I am new to Google Cloud Deployment Manager (GCM) and am writing some code in order to practice. I have read some interesting articles detailing how deploymentmanager.v2beta.typeProvider can be used to extend GCM and configure Kubernetes objects themselves as additional deployment resources. This is very appealing behavior and seems to open up great opportunities to extend declarative automation to any API, which is cool.

I am attempting to create a private node/public endpoint GKE cluster that is managed by custom typeProvider resources corresponding to GKE API calls. It seems that a public node/public endpoint GKE cluster is the only configuration that works with GCM custom typeProviders, and this seems wrong considering that a private node/public endpoint GKE configuration is possible.

It would seem odd for deploymentmanager.v2beta.typeProvider not to support a private node/public endpoint GKE configuration.

As an aside, I feel that a private node/private endpoint cluster fronted by a Cloud Endpoints service (to satisfy the GCM typeProvider requirement for a publicly reachable API endpoint) should also be a valid architecture, but I have yet to test it.
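
For clarity, the combination I am after corresponds to a privateClusterConfig roughly like the following (a sketch of the relevant GKE fields; enablePrivateEndpoint defaults to false, which is what keeps the master endpoint public):

    'privateClusterConfig': {
        'enablePrivateNodes': True,        # nodes get only internal IPs
        'enablePrivateEndpoint': False,    # master keeps its public endpoint
        'masterIpv4CidrBlock': '10.0.0.0/28'
    }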

Using the following code:

def GenerateConfig(context):
    # Some constant-type vars that are not really constants, just values reused below
    resources = []
    outputs = []
    gcp_type_provider = 'deploymentmanager.v2beta.typeProvider'
    extension_prefix = 'k8s'
    api_version = 'v1'
    kube_initial_auth = {
        'username': 'luis',
        'password': 'letmeinporfavors',
        "clientCertificateConfig": {
            'issueClientCertificate': True
        }
    }

    # EXTEND API TO CONTROL KUBERNETES BEGIN
    kubernetes_exposed_apis = [
        {
            'name': '{}-{}-api-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'api/v1'
        },
        {
            'name': '{}-{}-apps-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/apps/v1'
        },
        {
            'name': '{}-{}-rbac-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/rbac.authorization.k8s.io/v1'
        },
        {
            'name': '{}-{}-v1beta1-extensions-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/extensions/v1beta1'
        }
    ]
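    # For each exposed API group above, register a typeProvider whose
    # descriptorUrl points at the new cluster's swagger endpoint; the
    # inputMappings below inject the resource name and an OAuth2 bearer
    # token into every request Deployment Manager makes against that API.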
    for exposed_api in kubernetes_exposed_apis:
        descriptor_url = 'https://{}/swaggerapi/{}'.format(
            '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
            exposed_api['endpoint']
        )
        resources.append(
            {
                'name': exposed_api['name'],
                'type': gcp_type_provider,
                'properties': {
                    'options': {
                        'validationOptions': {
                            'schemaValidation': 'IGNORE_WITH_WARNINGS'
                        },
                        'inputMappings': [
                            {
                                'fieldName': 'name',
                                'location': 'PATH',
                                'methodMatch': '^(GET|DELETE|PUT)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'

                            },
                            {
                                'fieldName': 'metadata.name',
                                'location': 'BODY',
                                'methodMatch': '^(PUT|POST)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            {
                                'fieldName': 'Authorization',
                                'location': 'HEADER',
                                'value': '$.concat("Bearer ", $.googleOauth2AccessToken())'
                            }
                        ],
                    },
                    'descriptorUrl': descriptor_url
                },
            }
        )
    # EXTEND API TO CONTROL KUBERNETES END

    # NETWORK DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-network".format(context.env['deployment']),
            'type': "compute.{}.network".format(api_version),
            'properties': {
                'description': "{} network".format(context.env['deployment']),
                'autoCreateSubnetworks': False,
                'routingConfig': {
                    'routingMode': 'REGIONAL'
                }
            },
        }
    )

    resources.append(
        {
            'name': "{}-subnetwork".format(context.env['deployment']),
            'type': "compute.{}.subnetwork".format(api_version),
            'properties': {
                'description': "{} subnetwork".format(
                    context.env['deployment']
                ),
                'network': "$(ref.{}-network.selfLink)".format(
                    context.env['deployment']
                ),
                'ipCidrRange': '10.64.1.0/24',
                'region': 'us-east1',
                'privateIpGoogleAccess': True,
                'enableFlowLogs': False,
            }
        }
    )
    # NETWORK DEFINITION END

    # GKE CLUSTER DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-k8s-cluster".format(context.env['deployment']),
            'type': "container.{}.cluster".format(api_version),
            'properties': {
                'zone': 'us-east1-b',
                'cluster': {
                    'description': "{} kubernetes cluster".format(
                        context.env['deployment']
                    ),
                    'privateClusterConfig': {
                        'enablePrivateNodes': False,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    },
                    'ipAllocationPolicy': {
                        'useIpAliases': True
                    },
                    'nodePools': [
                        {
                            'name': "{}-cluster-pool".format(
                                context.env['deployment']
                            ),
                            'initialNodeCount': 1,
                            'config': {
                                'machineType': 'n1-standard-1',
                                'oauthScopes': [
                                    'https://www.googleapis.com/auth/compute',
                                    'https://www.googleapis.com/auth/devstorage.read_only',
                                    'https://www.googleapis.com/auth/logging.write',
                                    'https://www.googleapis.com/auth/monitoring'
                                ],
                            },
                            'management': {
                                'autoUpgrade': False,
                                'autoRepair': True
                            }
                        }],
                    'masterAuth': kube_initial_auth,
                    'loggingService': 'logging.googleapis.com',
                    'monitoringService': 'monitoring.googleapis.com',
                    'network': "$(ref.{}-network.selfLink)".format(
                        context.env['deployment']
                    ),
                    'clusterIpv4Cidr': '10.0.0.0/14',
                    'subnetwork': "$(ref.{}-subnetwork.selfLink)".format(
                        context.env['deployment']
                    ),
                    'enableKubernetesAlpha': False,
                    'resourceLabels': {
                        'purpose': 'expiramentation'
                    },
                    'networkPolicy': {
                        'provider': 'CALICO',
                        'enabled': True
                    },
                    'initialClusterVersion': 'latest',
                    'enableTpu': False,
                }
            }
        }
    )
    outputs.append(
        {
            'name': '{}-cluster-endpoint'.format(
                context.env['deployment']
            ),
            'value': '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
        }
    )
    # GKE CLUSTER DEFINITION END

    # bring it all together
    template = {
        'resources': resources,
        'outputs': outputs
    }

    # give it to google
    return template


if __name__ == '__main__':
    # Deployment Manager supplies the real context when it imports this
    # template; a small stub lets the file be run directly as a syntax check.
    class _StubContext(object):
        env = {'deployment': 'atrium', 'project': 'atrium-244423'}
    GenerateConfig(_StubContext())

I will also note the subsequent hello-world template, which uses the typeProviders created above.

def current_config():
    '''
    get the current configuration
    '''
    return {
        'name': 'atrium'
    }


def GenerateConfig(context):
    resources = []
    conf = current_config()
    resources.append(
        {
            'name': '{}-svc'.format(conf['name']),
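            # Composite type reference: <project>/<typeProvider name>:<collection path>;
            # path parameters such as {namespace} are filled in from the
            # matching resource properties below.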
            'type': "{}/{}-k8s-api-v1-type:/api/v1/namespaces/{}/pods".format(
                context.env['project'],
                conf['name'],
                '{namespace}'
            ),
            'properties': {
                'namespace': 'default',
                'apiVersion': 'v1',
                'kind': 'Pod',
                'metadata': {
                    'name': 'hello-world',
                },
                'spec': {
                    'restartPolicy': 'Never',
                    'containers': [
                        {
                            'name': 'hello',
                            'image': 'ubuntu:14.04',
                            'command': ['/bin/echo', 'hello', 'world'],
                        }
                    ]
                }
            }
        }
    )

    template = {
        'resources': resources,
    }

    return template


if __name__ == '__main__':
    # Stub context for direct execution; Deployment Manager provides the real
    # context (including env['project']) at deploy time.
    class _StubContext(object):
        env = {'project': 'atrium-244423'}
    GenerateConfig(_StubContext())
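
For completeness, recycle.sh (seen in the output below) just tears down and recreates the deployments with gcloud; a rough sketch of the equivalent commands, with hypothetical deployment and template file names:

    gcloud deployment-manager deployments delete atrium --quiet
    gcloud deployment-manager deployments create atrium --template cluster.py            # infra + typeProviders
    gcloud deployment-manager deployments create atrium-hello --template hello_world.py  # the pod above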

If I leave enablePrivateNodes as False:

                    'privateClusterConfig': {
                        'enablePrivateNodes': False,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    }

I get this response:

~/code/github/gcp/expiramentation/atrium_gcp_infra 24s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14]...done.
Delete operation operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14 completed successfully.
Waiting for delete [operation-1562105393399-58cba01134528-be47dc30-755cb106]...done.
Delete operation operation-1562105393399-58cba01134528-be47dc30-755cb106 completed successfully.
The fingerprint of the deployment is IiWcrdbZA5MedNlJLIicOg==
Waiting for create [operation-1562105786056-58cba187abee2-5d761e87-b446baca]...done.
Create operation operation-1562105786056-58cba187abee2-5d761e87-b446baca completed successfully.
NAME                                TYPE                                   STATE      ERRORS  INTENT
atrium-k8s-api-v1-type              deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-apps-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-cluster                  container.v1.cluster                   COMPLETED  []
atrium-k8s-rbac-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-v1beta1-extensions-type  deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-network                      compute.v1.network                     COMPLETED  []
atrium-subnetwork                   compute.v1.subnetwork                  COMPLETED  []
The fingerprint of the deployment is QJ2NS5EhjemyQJThUWYNHA==
Waiting for create [operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb]...done.
Create operation operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb completed successfully.
NAME        TYPE                                                                      STATE      ERRORS  INTENT
atrium-svc  atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods  COMPLETED  []

~/code/github/gcp/expiramentation/atrium_gcp_infra 13m 48s

This is a good response: my custom typeProvider resource is created correctly using the APIs of the freshly created cluster.
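
To double-check out of band, the pod can also be inspected directly (a sketch; the cluster name and zone come from the template above):

    gcloud container clusters get-credentials atrium-k8s-cluster --zone us-east1-b
    kubectl -n default get pod hello-world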

If, however, I make the cluster use private nodes, with:

                    'privateClusterConfig': {
                        'enablePrivateNodes': True,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    },

the create fails with:

~/code/github/gcp/expiramentation/atrium_gcp_infra 56s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562106572016-58cba47538c93-d34c17fc-8b863765]...done.
Delete operation operation-1562106572016-58cba47538c93-d34c17fc-8b863765 completed successfully.
Waiting for delete [operation-1562106592237-58cba4888184f-a5bc3135-4e662eed]...done.
Delete operation operation-1562106592237-58cba4888184f-a5bc3135-4e662eed completed successfully.
The fingerprint of the deployment is dk5nh_u5ZFFvYO-pCXnFBg==
Waiting for create [operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd]...done.
Create operation operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd completed successfully.
NAME                                TYPE                                   STATE      ERRORS  INTENT
atrium-k8s-api-v1-type              deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-apps-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-cluster                  container.v1.cluster                   COMPLETED  []
atrium-k8s-rbac-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-v1beta1-extensions-type  deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-network                      compute.v1.network                     COMPLETED  []
atrium-subnetwork                   compute.v1.subnetwork                  COMPLETED  []
The fingerprint of the deployment is 4RnscwpcYTtS614VXqtjRg==
Waiting for create [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]: errors:
- code: ERROR_PROCESSING_REQUEST
  message: 'Error fetching URL https://10.0.0.2:443/api/v1/namespaces/default/pods,
    reason: ERROR_EXCLUDED_IP'

The 10.0.0.2 appears to be the private endpoint of my cluster. I am having a difficult time tracking down where I can override the host of the https://10.0.0.2:443/api/v1/namespaces/default/pods URL so that it contacts the publicEndpoint rather than the privateEndpoint.
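
For reference, both addresses can be read back from the cluster to confirm which one Deployment Manager is resolving to (a sketch; cluster name and zone assumed from the template above):

    gcloud container clusters describe atrium-k8s-cluster --zone us-east1-b \
        --format="value(endpoint, privateClusterConfig.publicEndpoint, privateClusterConfig.privateEndpoint)"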

If this call went out to the public endpoint, I believe it would create successfully. As an interesting aside, the typeProvider declarations do hit the publicEndpoint of the cluster when fetching the descriptorUrl, and they succeed in doing so. Despite this, the creation of the actual API resources, such as the hello world example, attempts to talk to the private endpoint.

I feel this behavior should be overridable somewhere, but I have not been able to find where.

I have tried both a working public-node configuration and a non-working private-node configuration.

Luis
  • can you describe the private cluster? The endpoint should still be public unless you define the "enablePrivateEndpoint" field as true. The RESTful definition of the cluster should only have 1 endpoint value and, from your code, it looks like it should be referencing that one. Regardless, it might be worth adding the "enablePrivateEndpoint" field to your code to force the external endpoint – Patrick W Jul 19 '19 at 14:59

1 Answer


So I ran into the same problem writing [this](https://github.com/Aahzymandius/k8s-workshops/tree/master/8-live-debugging). While trying to debug it I ran into two issues:

  1. The public endpoint is used initially to fetch the swagger definitions of the various k8s APIs and resources; however, that response includes the private endpoint of the API server, which causes the k8s type to try to use that IP.

  2. While further trying to debug that error, plus a new one I was hitting, I discovered that GKE 1.14.x and later does not support insecure calls to the k8s API, which caused the k8s type to fail even with fully public clusters.

Since the functionality seems to fail with newer versions of GKE, I stopped trying to debug it, though I'd recommend reporting an issue on the GitHub repo for it. This functionality would be great to keep working.

Patrick W