
How can I measure the full time grpc-python takes to handle a request? So far the best I can do is:

def Run(self, request, context):
    start = time.time()
    # service code...
    end = time.time()
    return myservice_stub.Response()

But this doesn't measure how much time gRPC takes to serialize the request and response, transfer them over the network, and so on. I'm looking for a way to "hook" into these steps.

user3599803

2 Answers


You can measure on the client side:

start = time.time()
response = stub.Run(request)
total_end_to_end = time.time() - start

Then you can get the total overhead (serialization plus transfer) by subtracting the computation time of the Run method.

To automate the process, you can (at least for the sake of the test) add the computation time as a field to the response message, so the client can do the subtraction itself.
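A minimal sketch of that client-side subtraction, assuming a hypothetical `compute_seconds` field has been added to the response message just for this measurement (the field name and `stub.Run` signature are illustrative):

```python
import time

def measure_overhead(stub, request):
    """Return the gRPC overhead (serialization + transfer) for one call.

    Assumes the server fills in a `compute_seconds` field on the
    response with the time spent inside its Run method; the field is
    added only for this test and the name is an assumption.
    """
    start = time.time()
    response = stub.Run(request)
    total = time.time() - start
    # Everything that is not server computation is gRPC + network overhead.
    return total - response.compute_seconds
```

This still includes network latency; separating network time from serialization time would require measuring on the server side as well (for example with an interceptor).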

Mark Loyman
  • I've already done that, though this will include network + overhead in the server. I hope to measure the overhead in the server alone – user3599803 Oct 06 '20 at 08:39

You can use a server interceptor for this. The grpc-interceptor package simplifies writing them:

import logging
import time
from datetime import datetime
from typing import Any, Callable

import grpc
from grpc_interceptor import ServerInterceptor


class RequestLoggingInterceptor(ServerInterceptor):
    def intercept(
        self,
        method: Callable,
        request: Any,
        context: grpc.ServicerContext,
        method_name: str,
    ) -> Any:
        start_time = time.monotonic()
        # context.peer() looks like "ipv4:127.0.0.1:57642"
        ip = context.peer().split(":")[1]

        try:
            return method(request, context)
        finally:
            request_time = time.monotonic() - start_time
            logging.info(
                f"{ip} -- {datetime.now()} -- {method_name} -- {request_time}s"
            )
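The interceptor then has to be registered when the server is created. A sketch of the wiring, assuming grpcio and grpc-interceptor are installed and that the servicer and `add_to_server` function come from your generated `myservice_pb2_grpc` module (names assumed for illustration):

```python
from concurrent import futures

def serve(servicer, add_to_server, interceptors, port=50051):
    """Build and start a gRPC server with the given interceptors.

    `servicer` and `add_to_server` are the servicer instance and the
    add_*Servicer_to_server function from your generated module.
    """
    import grpc  # imported here so the sketch stays self-contained

    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=10),
        interceptors=list(interceptors),
    )
    add_to_server(servicer, server)
    server.add_insecure_port(f"[::]:{port}")
    server.start()
    return server
```

With `serve(MyServicer(), myservice_pb2_grpc.add_MyServiceServicer_to_server, [RequestLoggingInterceptor()])`, every call is timed on the server side, so you can compare that figure with the client-side end-to-end time.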
Douglas Bett