I came across some applications that say they use an Event-Driven Architecture to communicate between microservices (with Kafka as the event broker) and also a Service Mesh (Linkerd).
There are a couple of questions I have not found answers to:
- As I understand it, one of the main features of a Service Mesh (in this case Linkerd) is to help with service-to-service communication (service discovery, retries, circuit breaking, ...). If an application uses Kafka as an event broker to communicate between microservices, how does the Service Mesh come into the picture?
Let's say we have ServiceA and ServiceB (both with multiple deployments / nodes). If ServiceA wants to talk to ServiceB, it can produce to a Kafka topic, and ServiceB can subscribe (see the sketch after this list). How can a Service Mesh be present in this communication, and how can it improve it?
- If we have multiple deployments of ServiceB because of the load, how does load balancing happen here? If each deployment has a "sidecar" proxy, how do they decide how to read from Kafka, and which partitions does each node read? Do they operate as a consumer group?
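
To make the scenario concrete, here is roughly what I have in mind, using the plain Kafka Java client. The broker address `kafka:9092`, topic `service-b-events`, and group id `service-b` are placeholders I made up for illustration, not anything from the real applications:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaSketch {

    // ServiceA side: publishes an event to a topic instead of calling ServiceB directly.
    static void produceFromServiceA() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name is made up for illustration.
            producer.send(new ProducerRecord<>("service-b-events", "order-123", "{\"status\":\"created\"}"));
        }
    }

    // ServiceB side: every deployment/replica runs this with the same group.id,
    // so Kafka assigns each replica a subset of the topic's partitions.
    static void consumeInServiceB() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // placeholder broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "service-b");           // shared consumer group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("service-b-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d key=%s value=%s%n",
                            record.partition(), record.key(), record.value());
                }
            }
        }
    }
}
```

My understanding is that because every ServiceB replica shares the same `group.id`, Kafka itself spreads the topic's partitions across the replicas, which is exactly why I'm unsure what role the Linkerd sidecar would play in that load balancing.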