I am running a Python gRPC server and using Envoy as a proxy so that clients can connect to it. Envoy is deployed in GKE. I am attaching the Envoy deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy-deployment
  labels:
    app: envoy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      containers:
        - name: envoy
          image: envoyproxy/envoy:v1.22.5
          ports:
            - containerPort: 9901
          livenessProbe:
            httpGet:
              path: /healthz
              port: 9901
            initialDelaySeconds: 60
            timeoutSeconds: 5
            periodSeconds: 10
            failureThreshold: 2
          readinessProbe:
            httpGet:
              path: /healthz
              port: 9901
            initialDelaySeconds: 30
            timeoutSeconds: 5
            periodSeconds: 10
            failureThreshold: 2
          volumeMounts:
            - name: config
              mountPath: /etc/envoy
      volumes:
        - name: config
          configMap:
            name: envoy-conf
---
apiVersion: v1
kind: Service
metadata:
  name: envoy-deployment-service
  annotations:
    cloud.google.com/backend-config: '{"ports": {"9903":"envoy-app-backend-config"}}'
spec:
  ports:
    - protocol: TCP
      port: 9903
      targetPort: 9901
  selector:
    app: envoy
  type: LoadBalancer
  externalTrafficPolicy: Local
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: envoy-app-backend-config
spec:
  customRequestHeaders:
    headers:
      - "TE:trailers"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: envoy-ingress-prod
  namespace: seshat
  annotations:
    kubernetes.io/ingress.global-static-ip-name: envoy-ingress
    kubernetes.io/ingress.allow-http: "false"
    cert-manager.io/issuer: superset-issuer
    cloud.google.com/backend-config: '{"default": "envoy-app-backend-config"}'
  labels:
    name: envoy-ingress-app
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: envoy-deployment-service
                port:
                  number: 9903
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-conf
data:
  envoy.yaml: |
    admin:
      access_log_path: /tmp/admin_access.log
      address:
        socket_address: { address: 127.0.0.1, port_value: 9902 }
    static_resources:
      listeners:
        - name: listener_0
          address:
            socket_address: { address: 0.0.0.0, port_value: 9901 }
          filter_chains:
            - filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    codec_type: auto
                    access_log:
                      - name: envoy.access_loggers.file
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                          path: "/dev/stdout"
                          typed_json_format:
                            "@timestamp": "%START_TIME%"
                            client.address: "%DOWNSTREAM_REMOTE_ADDRESS%"
                            client.local.address: "%DOWNSTREAM_LOCAL_ADDRESS%"
                            envoy.route.name: "%ROUTE_NAME%"
                            envoy.upstream.cluster: "%UPSTREAM_CLUSTER%"
                            host.hostname: "%HOSTNAME%"
                            http.request.body.bytes: "%BYTES_RECEIVED%"
                            http.request.duration: "%DURATION%"
                            http.request.headers.bytes: "%REQUEST_HEADERS_BYTES%"
                            http.request.headers.accept: "%REQ(ACCEPT)%"
                            http.request.headers.authority: "%REQ(:AUTHORITY)%"
                            http.request.headers.te: "%REQ(:TE)%"
                            http.request.headers.id: "%REQ(X-REQUEST-ID)%"
                            http.request.headers.x_forwarded_for: "%REQ(X-FORWARDED-FOR)%"
                            http.request.headers.x_forwarded_proto: "%REQ(X-FORWARDED-PROTO)%"
                            http.request.headers.x_b3_traceid: "%REQ(X-B3-TRACEID)%"
                            http.request.headers.x_b3_parentspanid: "%REQ(X-B3-PARENTSPANID)%"
                            http.request.headers.x_b3_spanid: "%REQ(X-B3-SPANID)%"
                            http.request.headers.x_b3_sampled: "%REQ(X-B3-SAMPLED)%"
                            http.request.method: "%REQ(:METHOD)%"
                            http.response.body.bytes: "%BYTES_SENT%"
                    stat_prefix: ingress_http
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: envoy_service
                          domains: ["*"]
                          routes:
                            - match:
                                prefix: "/healthz"
                              direct_response: { status: 200, body: { inline_string: "ok it is working now" } }
                            - match:
                                prefix: "/heal"
                              direct_response: { status: 200, body: { inline_string: "ok heal is working now" } }
                            - match:
                                prefix: "/"
                                #headers:
                                #  - name: te
                                #    exact_match: "trailers"
                              route: {
                                prefix_rewrite: "/",
                                cluster: envoy_service
                              }
                          cors:
                            allow_origin_string_match:
                              - prefix: "*"
                            allow_methods: GET, PUT, DELETE, POST, OPTIONS
                            allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                            max_age: "1728000"
                            expose_headers: custom-header-1,grpc-status,grpc-message
                    http_filters:
                      - name: envoy.filters.http.cors
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                      - name: envoy.filters.http.grpc_web
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                      - name: envoy.filters.http.router
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
        - name: envoy_service
          connect_timeout: 0.25s
          type: strict_dns
          http2_protocol_options: {}
          lb_policy: round_robin
          load_assignment:
            cluster_name: envoy_service
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: seshat-app-server-headless
                          port_value: 8000
I have a Python gRPC server which receives a proto message and publishes it to a Pub/Sub topic. The server startup code looks like this:
from concurrent import futures
from signal import signal, SIGTERM

import grpc

import master_pb2_grpc

# EventBusServiceServicer is defined earlier in this module.

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
master_pb2_grpc.add_EventBusOneofServiceServicer_to_server(
    EventBusServiceServicer(), server
)
server.add_insecure_port("0.0.0.0:8000")
server.start()
print("server started")


def handle_sigterm(*_):
    print("Received shutdown signal")
    all_rpcs_done_event = server.stop(30)
    all_rpcs_done_event.wait(30)
    print("Shut down gracefully")


signal(SIGTERM, handle_sigterm)
server.wait_for_termination()
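For reference, the servicer is essentially the following shape (simplified sketch; the Pub/Sub project and topic names are placeholders, and the response message name may differ from my generated master_pb2 code):

from google.cloud import pubsub_v1

import master_pb2
import master_pb2_grpc


class EventBusServiceServicer(master_pb2_grpc.EventBusOneofServiceServicer):
    """Receives the oneof event and publishes it to a Pub/Sub topic."""

    def __init__(self):
        self.publisher = pubsub_v1.PublisherClient()
        # Placeholder project/topic names.
        self.topic_path = self.publisher.topic_path("my-project", "my-topic")

    def ReceiveOneofEvent(self, request, context):
        # Publish the serialized proto and block until Pub/Sub acknowledges it.
        self.publisher.publish(self.topic_path, request.SerializeToString()).result()
        # Response type name is illustrative; the real name comes from the generated code.
        return master_pb2.ReceiveOneofEventResponse()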
My Python gRPC client looks like this:
import os

import grpc

import master_pb2
import master_pb2_grpc


class ExampleServiceClient(object):
    def __init__(self):
        """Initializer.

        Creates a gRPC channel for connecting to the server
        and attaches it to the generated client stub.
        """
        self.channel = grpc.secure_channel(
            "domain name",
            grpc.ssl_channel_credentials(),
            options=(('grpc.enable_http_proxy', 0),),
        )
        self.stub = master_pb2_grpc.EventBusOneofServiceStub(self.channel)

    def receiveEvent(self, request):
        """Sends a ReceiveOneofEvent request and prints the response.

        Arguments:
            request: A ReceiveOneofEventRequest message.

        Returns:
            None; outputs to the terminal.
        """
        try:
            print(request)
            response = self.stub.ReceiveOneofEvent(request)
            print("Event sent.")
            print(response)
        except grpc.RpcError as err:
            print(err)
            print(err.details())  # pylint: disable=no-member
            print("{}, {}".format(err.code().name, err.code().value))


if __name__ == "__main__":
    os.environ['GRPC_TRACE'] = 'all'
    os.environ['GRPC_VERBOSITY'] = 'DEBUG'

    # Make sure no HTTP(S) proxy interferes with the gRPC connection.
    if os.environ.get('https_proxy'):
        print("https proxy present, removing it")
        del os.environ['https_proxy']
    if os.environ.get('http_proxy'):
        print("http proxy present, removing it")
        del os.environ['http_proxy']

    for x in range(1, 2):
        client = ExampleServiceClient()
        msg = master_pb2.ReceiveOneofEventRequest()
        msg.r.first_name = "a"
        msg.r.last_name = "b"
        msg.r.email = "c"
        client.receiveEvent(msg)
When I make calls using the client, I get the following error:
<_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Stream removed"
debug_error_string = "{"created":"@1667812262.198477000","description":"Error received from peer ipv4:IP:443","file":"src/core/lib/surface/call.cc","file_line":967,"grpc_message":"Stream removed","grpc_status":2}"
>
Stream removed
UNKNOWN, (2, 'unknown')
The thing is, even though this exception is thrown, the server is able to process the client's request: the message is successfully pushed to the Pub/Sub topic. But instead of receiving the expected response, the client gets this error. What might be the reason for this?