I'm trying to get a fairly simple test scenario to work - I'd like to create a long-lived bidirectional streaming RPC that may sit idle for long periods of time (Electron app with a local server).
A Node gRPC client starts a C# gRPC server locally and initiates a bidirectional stream. The streaming service receives each message, waits 50 ms, and sends it back.
The Node client test code is set up to send 5 messages, wait 30 seconds, and then send 5 more messages. The first 5 messages round-trip successfully. The second 5 eventually round-trip as well, but not until 5 minutes later, and the server-side code is not hit during that time.
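For reference, the client side of the test looks roughly like the sketch below. The proto file name, package, service, method, and message field are placeholders, not the actual ones:

```typescript
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// Placeholder proto/service names: echo.proto, package "echo", service "EchoService".
const packageDef = protoLoader.loadSync('echo.proto');
const proto = grpc.loadPackageDefinition(packageDef) as any;

const client = new proto.echo.EchoService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

const call = client.echo();                       // bidirectional stream
call.on('data', (msg: any) => console.log('echoed back:', msg));
call.on('error', (err: Error) => console.error('stream error:', err));

const send = (count: number) => {
  for (let i = 0; i < count; i++) call.write({ text: `message ${i}` });
};

send(5);                                          // first batch round-trips fine
setTimeout(() => send(5), 30_000);                // second batch stalls for ~5 minutes
```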
I'm sure I'm being a baboon here, but I don't understand why the connection seems to die so quickly, and I'm not sure which options, if any, would help. keepalive appears to be intended for detecting whether the TCP connection is still alive rather than for actually keeping it alive. idleTimeout doesn't seem relevant either, because the channel is going to TRANSIENT_FAILURE (not IDLE), according to the enum documentation here.
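In case keepalive pings do turn out to be the right lever, a minimal way to enable them on the @grpc/grpc-js client would look something like this. The option names are real channel arguments; the values and the proto/service names are assumptions:

```typescript
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

const packageDef = protoLoader.loadSync('echo.proto');
const proto = grpc.loadPackageDefinition(packageDef) as any;

const client = new proto.echo.EchoService(
  'localhost:50051',
  grpc.credentials.createInsecure(),
  {
    'grpc.keepalive_time_ms': 10_000,             // send an HTTP/2 ping every 10 s, even when idle
    'grpc.keepalive_timeout_ms': 5_000,           // treat the connection as dead if no ack within 5 s
    'grpc.keepalive_permit_without_calls': 1,     // allow pings while no calls are in flight
  }
);
```

My understanding is that the server also has to be configured to tolerate pings at that rate (e.g. the C-core `grpc.http2.min_ping_interval_without_data_ms` and `grpc.http2.max_pings_without_data` arguments), otherwise it can answer with a GOAWAY for too_many_pings, so the C# side would need matching settings.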
This discussion from 2016 is close to what I'm trying to do, but the solution there was a roll-your-own (RYO) heartbeat. This grpc-dotnet issue seems to rely on a heartbeat-type solution specific to ASP.NET, which we aren't currently using.
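If it comes to that, a roll-your-own heartbeat on the existing stream would presumably look something like the sketch below. The message shape and interval are made up, and the server would simply echo the heartbeat back like any other message:

```typescript
import * as grpc from '@grpc/grpc-js';

const HEARTBEAT_INTERVAL_MS = 15_000;

// Periodically write a no-op message so the stream never sits idle long
// enough for the transport to be torn down.
function startHeartbeat(call: grpc.ClientDuplexStream<any, any>): NodeJS.Timeout {
  return setInterval(() => call.write({ text: '__heartbeat__' }), HEARTBEAT_INTERVAL_MS);
}

// Usage:
// const timer = startHeartbeat(call);
// ...when closing the stream:
// clearInterval(timer);
```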
gRPC server logs:
After the first 5 messages are sent:
transport 000001A7B5A63090 set connectivity_state=4
Start BDP ping err..."Endpoint read failed" (paraphrasing)
5 minutes later, right before the second set of 5 messages comes through:
W:000001A7B5AC8A10 SERVER [ipv6:[::1]:57416] state IDLE -> WRITING [RETRY_SEND_PING]
The Node library is @grpc/grpc-js.
tl;dr: How can I keep the connection healthy and working across long idle periods?