
I'm currently trying to build a tracing tool for fun (one that supports gRPC tracing), and I'm not sure whether I'm thinking about this architecture properly. A tracing tool keeps track of the entire workflow/journey of a request (from the moment a user clicks a button, to when the request goes to the API gateway, between microservices, and back).

Let's say the application is a bookstore, and it is broken up into 2 microservices, say account and books. There is a User Interface, and clicking a button lets a user favorite a book. I'm only using 2 microservices to keep this example simple.

**Different parts of the Fake/Mock up application**
UI ->
nginx -> I want to use this as the API gateway.
microservice 1 -> contains data for all users of the bookstore
microservice 2 -> contains data for all the books

**So my goal is to figure out a way to trace that request. We can imagine the request goes to nginx first.**


Concern #1: When the request reaches nginx, it is HTTP. Cool, but when the request is sent on to a microservice, it is a gRPC call (i.e., over HTTP/2). Can nginx receive an HTTP request and then forward it over HTTP/2? Not sure if I'm wording this correctly. I know NGINX Plus supports HTTP/2, and I also know gRPC has grpc-gateway.
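For what it's worth, open-source nginx (1.13.10+) can proxy gRPC calls upstream with the `grpc_pass` directive, routing by path prefix (a gRPC call's path is `/package.Service/Method`). This is a sketch, and the service names and ports are assumptions for this mock bookstore; note that translating a plain HTTP/1.1 JSON request from a browser into gRPC is a separate job (that's what grpc-gateway or grpc-web does), whereas this only routes requests that are already gRPC:

```
# Sketch: routing incoming gRPC (HTTP/2) calls to two backends.
# Service names and ports are assumptions for this mock bookstore.
server {
    listen 80 http2;

    # Calls to the account service, e.g. /account.AccountService/GetUser
    location /account. {
        grpc_pass grpc://account-service:50051;
    }

    # Calls to the books service, e.g. /books.BooksService/FavoriteBook
    location /books. {
        grpc_pass grpc://books-service:50052;
    }
}
```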

Concern #2: Containerization. Do I have to containerize each microservice individually, or can I containerize the whole application as one container? Is it simple to link nginx and Docker?
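The usual answer is one container per microservice, with nginx in its own container on a shared network, so each piece can be scaled and restarted independently. A minimal Docker Compose sketch, where the image names, build paths, and ports are assumptions:

```
# Sketch: one container per service plus nginx as the gateway.
version: "3.8"
services:
  gateway:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - account-service
      - books-service
  account-service:
    build: ./account     # hypothetical directory with the account service
    expose:
      - "50051"
  books-service:
    build: ./books       # hypothetical directory with the books service
    expose:
      - "50052"
```

On the default Compose network, nginx can reach the services by their service names (`account-service`, `books-service`), which is what the upstream addresses in the nginx config would refer to.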

Concern #3: When tracing gRPC requests (finding out how long a request takes to be fulfilled), I'm considering using middleware logging or a tracing API (OpenTracing, Jaeger, etc.) to do this. How else could I measure how long gRPC requests take?
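The middleware approach usually means a client interceptor: wrap every outgoing call, record a start time, and log (or report to a tracer) the elapsed time when the call completes. A minimal stdlib-only sketch of that timing logic; in real gRPC Python code this would live inside a `grpc.UnaryUnaryClientInterceptor`, and the method name below is hypothetical:

```python
import time
import logging

logging.basicConfig(level=logging.INFO)


def timed_call(method_name, call, *args, **kwargs):
    """Invoke `call` and log how long it took.

    This is the core of a tracing middleware: in gRPC Python the same
    logic would go in an interceptor's intercept_unary_unary method,
    and the log line would instead become a span sent to Jaeger.
    """
    start = time.perf_counter()
    try:
        return call(*args, **kwargs)
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        logging.info("%s took %.2f ms", method_name, elapsed_ms)


# Hypothetical usage with a stand-in for a gRPC stub method:
result = timed_call("books.FavoriteBook",
                    lambda book_id: {"ok": True, "book_id": book_id},
                    42)
```

For cross-service traces (gateway -> account -> books), the interceptor would also inject a trace/span ID into the outgoing gRPC metadata so each hop can be stitched into one trace, which is exactly what the OpenTracing/Jaeger client libraries automate.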

I was wondering if it is possible to address these concerns, whether my thought process is correct, and whether this architecture is feasible.

TheRoadLessTaken

1 Answer


Most solutions in the industry are implemented on top of a container orchestration platform (Kubernetes, Docker Swarm, etc.).
It is usually not a good idea to containerize and manage the reverse proxy yourself.
The reverse proxy should be aware of the status of all containers (by hooking into the orchestrator) and dynamically update its configuration when a container is created, crashes, or is relocated (e.g., because a machine goes out of service).
Kubernetes handles gRPC using service meshes; take a look at Kubernetes service meshes.
If you decide to use Traefik and Docker Swarm, check out Traefik's h2c support.
In conclusion, consider more modern alternatives to nginx when you want to load-balance gRPC.

HosseinAgha