
I'm trying to improve network performance between a gRPC client and server. The client's network speed is 1 Gbps.

Assume my server takes 200 ms to respond, and that I measure latency on the client side.

Now suppose the server's processing time goes up, say to 700 ms per response. Where will the requests accumulate? Will they stay in a client-side queue, or will they still be sent to the server and wait in the server's queue?

In other words, does a gRPC client hold a queue of requests, or is every request always sent immediately, which would mean the measured latency does not depend on the server's processing time?

And is there a setting for it in grpc-python?

user3599803

1 Answer


I suggest you check the client-side interceptor and server-side interceptor classes.
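As an illustration, here is a minimal client-side interceptor sketch in grpc-python that logs per-call latency. The address `localhost:50051` is a placeholder, and this assumes the blocking unary-unary stub API:

```python
import time
import grpc

# Sketch of a client-side interceptor that measures per-call latency.
class LatencyInterceptor(grpc.UnaryUnaryClientInterceptor):
    def intercept_unary_unary(self, continuation, client_call_details, request):
        start = time.monotonic()
        call = continuation(client_call_details, request)  # a Call/Future object
        call.result()  # block until the response actually arrives
        print(f"{client_call_details.method}: {time.monotonic() - start:.3f}s")
        return call

# Wrap a channel so every unary call made through it gets timed.
channel = grpc.intercept_channel(
    grpc.insecure_channel("localhost:50051"), LatencyInterceptor()
)
```

Any stub created from `channel` will then log how long each call took end to end, which lets you see where time is spent as the server slows down.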

Also, if you want to debug the requests, you can generate multiple requests, either all at once or at timed intervals, using JMeter or Postman Runner.
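If you'd rather stay in Python, a stdlib-only sketch along the same lines fires concurrent requests with a thread pool; `fake_rpc` here is just a stand-in for a blocking stub call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_rpc(i):
    # Stand-in for a blocking gRPC stub call; simulates a 200 ms response.
    time.sleep(0.2)
    return i

def fire(n, concurrency):
    """Send n calls with the given client-side concurrency, return results and wall time."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(fake_rpc, range(n)))
    return results, time.monotonic() - start

results, elapsed = fire(10, concurrency=10)
print(f"{len(results)} calls in {elapsed:.2f}s")
```

Varying `concurrency` while watching `elapsed` shows whether calls are waiting on the client (low concurrency) or on the server.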

  • Can you elaborate? My point is to understand how the networking works. I'll give an example: assume the server has an infinite amount of RAM. I want the client to push as many requests as it can, regardless of the server's processing time. That is, I would rather the requests fill up the server's queue, even if the server is slow. Does the gRPC client queue requests when the server is slow? – user3599803 Nov 13 '20 at 11:11