I am running a Python gRPC server and using Envoy, deployed in GKE, as a proxy between the client and the server. I am attaching the Envoy deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy-deployment
  labels:
    app: envoy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      containers:
      - name: envoy
        image: envoyproxy/envoy:v1.22.5
        ports:
        - containerPort: 9901
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9901
          initialDelaySeconds: 60
          timeoutSeconds: 5
          periodSeconds: 10
          failureThreshold: 2
        readinessProbe:
          httpGet:
            path: /healthz
            port: 9901
          initialDelaySeconds: 30
          timeoutSeconds: 5
          periodSeconds: 10
          failureThreshold: 2
        volumeMounts:
        - name: config
          mountPath: /etc/envoy
      volumes:
      - name: config
        configMap:
          name: envoy-conf
---
apiVersion: v1
kind: Service
metadata:
  name: envoy-deployment-service
spec:
  ports:
  - protocol: TCP
    port: 9903
    targetPort: 9901
    name: grpc
  selector:
    app: envoy
  type: LoadBalancer
  externalTrafficPolicy: Local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: envoy-ingress-prod
  annotations:
    kubernetes.io/ingress.global-static-ip-name: envoy-ip
    kubernetes.io/ingress.allow-http: "false"
    cert-manager.io/issuer: random-issuer

  labels:
    name: envoy-ingress-app
spec:
  tls:
  - hosts:
    - <domain name>
    secretName: <secret name>
  rules:
  - host: <domain name>
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: envoy-deployment-service
            port:
              number: 9903
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-conf
data:
  envoy.yaml: |
    admin:
      access_log_path: /dev/stdout
      address:
        socket_address: { address: 127.0.0.1, port_value: 9902 }

    static_resources:
      listeners:
        - name: listener_0
          address:
            socket_address: { address:  0.0.0.0, port_value: 9901 }
          filter_chains:
            - filters:
              - name: envoy.filters.network.http_connection_manager
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                  codec_type: auto
                  stat_prefix: ingress_http
                  route_config:
                    name: local_route
                    virtual_hosts:
                      - name: envoy_service
                        domains: ["*"]
                        routes:
                        - match:
                            prefix: "/healthz"
                          direct_response: { status: 200, body: { inline_string: "ok it is working now" } }
                        - match:
                            prefix: "/heal"
                          direct_response: { status: 200, body: { inline_string: "ok heal is working now" } }
                        - match:
                            prefix: "/"
                          route:
                            prefix_rewrite: "/"
                            cluster: envoy_service
                        cors:
                          allow_origin_string_match:
                            - prefix: "*"
                          allow_methods: GET, PUT, DELETE, POST, OPTIONS
                          allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                          max_age: "1728000"
                          expose_headers: custom-header-1,grpc-status,grpc-message
                  http_filters:
                    - name: envoy.filters.http.cors
                      typed_config:
                        "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                    - name: envoy.filters.http.grpc_web
                      typed_config:
                        "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                    - name: envoy.filters.http.router
                      typed_config:
                        "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
        - name: envoy_service
          connect_timeout: 0.25s
          type: strict_dns
          http2_protocol_options: {}
          lb_policy: round_robin
          load_assignment:
            cluster_name: envoy_service
            endpoints:
              - lb_endpoints:
                - endpoint:
                    address:
                      socket_address:
                        address: app-server-headless.default.svc.cluster.local
                        port_value: 8000

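Since the Envoy cluster points at `app-server-headless.default.svc.cluster.local:8000`, one quick way to rule out DNS or connectivity problems behind a "no healthy upstream" error is a plain TCP check run from a pod inside the cluster. A minimal sketch (stdlib only; host and port are taken from the cluster config above, which only resolves in-cluster):

```python
import socket


def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False


if __name__ == "__main__":
    # This name only resolves from inside the cluster (kube-dns / Cloud DNS).
    host = "app-server-headless.default.svc.cluster.local"
    print(tcp_check(host, 8000))
```

If this prints `False` from a pod in the `default` namespace, Envoy has no chance of reaching the upstream either.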
My Python gRPC server looks like this:

from concurrent import futures
from signal import signal, SIGTERM

import grpc

import master_pb2_grpc
# EventBusServiceServicer is defined elsewhere in my code

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
master_pb2_grpc.add_EventBusOneofServiceServicer_to_server(
    EventBusServiceServicer(), server
)
server.add_insecure_port("0.0.0.0:8000")
server.start()
print("server started")


def handle_sigterm(*_):
    print("Received shutdown signal")
    all_rpcs_done_event = server.stop(30)
    all_rpcs_done_event.wait(30)
    print("Shut down gracefully")


signal(SIGTERM, handle_sigterm)
server.wait_for_termination()

My gRPC Python client looks like this:

import os

import grpc

import master_pb2
import master_pb2_grpc


class ExampleServiceClient(object):
    def __init__(self):
        """Initializer.

        Creates a gRPC channel for connecting to the server and
        attaches the channel to the generated client stub.
        """
        self.channel = grpc.secure_channel(
            "<domain name>",
            grpc.ssl_channel_credentials(),
            options=(('grpc.enable_http_proxy', 0),),
        )
        self.stub = master_pb2_grpc.EventBusOneofServiceStub(self.channel)

    def receiveEvent(self, request):
        """Sends an event to the server.

        Arguments:
            request: The ReceiveOneofEventRequest to send.

        Returns:
            None; outputs to the terminal.
        """
        try:
            print(request)
            response = self.stub.ReceiveOneofEvent(request)
            print("Event received.")
            print(response)
        except grpc.RpcError as err:
            print(err)
            print(err.details())  # pylint: disable=no-member
            print("{}, {}".format(err.code().name, err.code().value))


if __name__ == "__main__":
    os.environ['GRPC_TRACE'] = 'all'
    os.environ['GRPC_VERBOSITY'] = 'DEBUG'
    if os.environ.get('https_proxy'):
        print("yes proxy present")
        del os.environ['https_proxy']
    if os.environ.get('http_proxy'):
        print("yes proxy present")
        del os.environ['http_proxy']

    for x in range(1, 2):  # runs once; raise the upper bound to send more events
        client = ExampleServiceClient()
        msg = master_pb2.ReceiveOneofEventRequest()
        msg.r.first_name = "a"
        msg.r.last_name = "b"
        msg.r.email = "c"
        client.receiveEvent(msg)

When I make calls using the client, I get this error:

<_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "no healthy upstream"
    debug_error_string = "{"created":"@1667414018.866519000","description":"Error received from peer ipv4:IP:443","file":"src/core/lib/surface/call.cc","file_line":967,"grpc_message":"no healthy upstream","grpc_status":14}"
>
no healthy upstream
UNAVAILABLE, (14, 'unavailable')

What could be the cause of this error? I don't see anything in the Envoy logs.

  • I don't see the Deployment and Service resources relative to your grpc server in your question, so just to be sure, is there a service running in your cluster corresponding to `app-server-headless.default.svc.cluster.local` and listening on port 8000 (the cluster address and port you have set in your Envoy config)? Can you access this service directly from a pod? To test this, you can `kubectl run --rm -it python -- bash`, copy your sources, and run your grpc client from inside the pod (don't forget to change your code to connect to `app-server-headless.default.svc.cluster.local:8000`). – norbjd Nov 05 '22 at 16:23
  • @norbjd changing the service name to "**app-server-headless**" did the trick. However now I get different error `{"grpc_message":"Stream removed","grpc_status":2}`. Any idea what might be causing this issue? – ak1234 Nov 07 '22 at 07:24
  • Unfortunately no :( did it work outside Kubernetes? You may want to debug locally your app first. – norbjd Nov 09 '22 at 11:54
  • This works locally and without the ingress as well. Only when I add the ingress do these weird errors come up. – ak1234 Nov 09 '22 at 14:05
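
As the comment thread above shows, the original "no healthy upstream" came down to the upstream hostname in the Envoy cluster not matching the actual Service. From inside a pod, a small resolver check makes this class of problem obvious; a sketch (stdlib only, service names taken from the Envoy config):

```python
import socket


def resolve(name: str, port: int = 8000):
    """Return the sorted list of IPs a name resolves to, or None on failure."""
    try:
        infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return None


if __name__ == "__main__":
    # Both forms should resolve (to the same pod IPs, since the Service is
    # headless) when run from a pod in the "default" namespace.
    for name in ("app-server-headless",
                 "app-server-headless.default.svc.cluster.local"):
        print(name, "->", resolve(name))
```

If the FQDN form returns `None` while the short form resolves (or vice versa), the name in the Envoy cluster's `socket_address` needs to be corrected to whichever form the cluster DNS actually serves.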
