
I'm trying to understand whether a gRPC server using streams is able to wait for all client messages to be read before sending its responses.

I have a trivial application where I send in several pairs of numbers I'd like to add and get the results back. I've set up a basic proto file to test this:

syntax = "proto3";


message CalculateRequest{
    int64 x = 1;
    int64 y = 2;
};

message CalculateReply{
    int64 result = 1;
}

service Svc {
    rpc CalculateStream (stream CalculateRequest) returns (stream CalculateReply);
}

On the server side I have implemented the following code, which yields a reply as each request is received:

import logging
from concurrent import futures

import grpc

import contracts_pb2
import contracts_pb2_grpc


class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        for request in request_iterator:
            resultToOutput = request.x + request.y
            yield contracts_pb2.CalculateReply(result=resultToOutput)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    contracts_pb2_grpc.add_SvcServicer_to_server(
        CalculatorServicer(), server)
    server.add_insecure_port('localhost:9000')
    server.start()
    server.wait_for_termination()


if __name__ == '__main__':
    print("We're up")
    logging.basicConfig()
    serve()
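
For reference, a minimal client sketch that exercises this stream might look like the following (assuming the generated stub is named SvcStub and the server is listening on localhost:9000; this is just an illustration, not my exact client):

import grpc

import contracts_pb2
import contracts_pb2_grpc


def generate_requests():
    # Stream a few request messages to the server.
    for x, y in [(1, 2), (3, 4), (5, 6)]:
        yield contracts_pb2.CalculateRequest(x=x, y=y)


def run():
    with grpc.insecure_channel('localhost:9000') as channel:
        stub = contracts_pb2_grpc.SvcStub(channel)
        # The stub consumes the request iterator and returns a response iterator.
        for reply in stub.CalculateStream(generate_requests()):
            print(reply.result)


if __name__ == '__main__':
    run()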

I'd like to tweak this to first read in all the numbers and then send these out at a later stage - something like the following:

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        listToReturn = []
        for request in request_iterator:
            listToReturn.append(request.x + request.y)

        # ...
        # do some other stuff first before returning

        for item in listToReturn:
            yield contracts_pb2.CalculateReply(result=item)

Currently, this doesn't work: the code after the first loop is never reached. Is it by design that the connection seems to "close" before getting there?

The grpc.io website suggests that this should be possible with bidirectional streaming:

for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes.

Thanks in advance for any help :)

1 Answer


The issue here is the definition of "all client messages." At the transport level, the server has no way of knowing whether the client has finished sending, short of the client closing its connection.

You need to add to the protocol some explicit indication that the client has finished sending requests: either add a bool field to the existing CalculateRequest, or add a top-level oneof with one of the options being something like a StopSendingRequests message.
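
As a rough sketch of the oneof variant (the message and field names here are just illustrative, not a fixed convention), the proto could look like this:

message StopSendingRequests {
}

message CalculateStreamRequest {
    oneof payload {
        CalculateRequest calculate = 1;
        StopSendingRequests stop_sending_requests = 2;
    }
}

service Svc {
    rpc CalculateStream (stream CalculateStreamRequest) returns (stream CalculateReply);
}

The servicer could then buffer its results until it sees the stop message, for example:

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        results = []
        for request in request_iterator:
            if request.HasField('stop_sending_requests'):
                # The client has signalled that it is done sending; stop reading.
                break
            results.append(request.calculate.x + request.calculate.y)

        # do some other stuff first before returning

        for result in results:
            yield contracts_pb2.CalculateReply(result=result)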

Richard Belleville