
I was reading about the performance of gRPC and found a couple of interesting benchmarks:

gRPC is capable of processing approximately 36K requests per second on a single-core server and approximately 62K requests per second on two cores (using the Java implementation).


When it comes to latency, the benchmark shows 77 ms at p99, which is not acceptable when sub-millisecond latency is required.


I would expect latency and throughput to improve dramatically when two or more servers communicate over the same local network.

Does gRPC open a local TCP connection by default when one is available? Can I assume latency/throughput would be dramatically better than the benchmarks shown in such cases?
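
For reference, here is a minimal sketch of the client-side setup I have in mind, assuming the standard grpc-java `ManagedChannelBuilder` API; the port and the in-process channel name are placeholders, and the in-process transport is shown only as the alternative I'm aware of for same-JVM communication:

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import io.grpc.Server;
    import io.grpc.inprocess.InProcessChannelBuilder;
    import io.grpc.inprocess.InProcessServerBuilder;

    public class LocalChannelSketch {
        public static void main(String[] args) throws Exception {
            // Regular channel to a server running on the same host
            // (port 50051 is a placeholder).
            ManagedChannel tcpChannel = ManagedChannelBuilder
                    .forAddress("localhost", 50051)
                    .usePlaintext() // skip TLS for a local test
                    .build();

            // grpc-java's in-process transport, which keeps client and
            // server in one JVM and bypasses sockets entirely
            // ("local-bench" is a placeholder name).
            Server inProcServer = InProcessServerBuilder
                    .forName("local-bench")
                    .directExecutor()
                    .build()
                    .start();
            ManagedChannel inProcChannel = InProcessChannelBuilder
                    .forName("local-bench")
                    .directExecutor()
                    .build();

            // ... issue RPCs via generated stubs over either channel ...

            inProcChannel.shutdownNow();
            inProcServer.shutdownNow();
            tcpChannel.shutdownNow();
        }
    }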

Thanks

  • It is a bit hard to tell from a quick skim and without p50 numbers, but it seems the linked benchmark used a single run to measure both throughput and latency. That is fundamentally broken: to find the throughput limit you need to saturate the server, which will produce poor latency. The official gRPC benchmarks show ~130 µs for a no-op RPC on an unloaded TLS connection over a local network. https://grpc.io/docs/guides/benchmarking/ – Eric Anderson Mar 22 '22 at 22:50

0 Answers