Assuming you have your domains modeled properly, this seems like an easy fix with Integration Events.
Add an `EmployeeIds` table to your Departments service and a `DepartmentIds` table to your Employees service. When you make, break, or change an assignment between an `Employee` and a `Department`, publish an `EmployeeDepartmentUpdated` event that both services subscribe to. Each service can then process the event and update its own data to stay in sync.
You do NOT want to start putting data into your gateway API; that's not what it's for (and if you have multiple gateways to the same back-end services, only one of them will have that information).
Embrace Eventual Consistency and your microservices journey will be the better for it!
EDIT:
To your question about the impact of Events on performance and complexity, the answers are "no" and "yes."
First, no, I would not expect event-driven integration to have a negative impact on system performance. Because events are processed asynchronously, event handling is a separate concern from API responsiveness.
I'm sure there are ways to build a Service Oriented Architecture (SOA, of which microservices is essentially a subset) without a messaging plane, but in my experience having one is a fantastic way to enable loosely coupled communication.
Any direct call between services, regardless of protocol (HTTP, gRPC, etc.), means tight coupling between those services. Endpoint names, arguments, etc. are all opportunities for breaking changes. When you use messaging, each service is responsible for emitting backward-compatible events, and every other service can choose which events it cares about, subscribe to them, and never have any knowledge of whether the emitting service is running, dead, changed, etc.
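One common way to keep events backward-compatible is for consumers to read only the fields they understand and tolerate new ones. A hedged Python sketch (field names are assumptions for illustration):

```python
import json

def parse_event(raw: str) -> dict:
    """Consumer-side parsing that tolerates producer changes.

    Unknown fields are ignored and missing optional fields get
    defaults, so the producer can add fields without breaking us.
    """
    data = json.loads(raw)
    return {
        "employee_id": data["employee_id"],           # required, stable field
        "department_id": data["department_id"],       # required, stable field
        "effective_date": data.get("effective_date"), # optional, added later
    }

# A newer producer might add "effective_date" and "audit_user"; an
# older consumer keeps working because it only picks what it understands.
```

Contrast this with a direct RPC call, where a renamed parameter breaks every caller at once.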
To your second question, the answer is absolutely "yes" - event processing is additional complexity. However, it's part of the complexity you sign up for (and far from the worst of it) when you choose a microservices architecture style. Distributed authorization, keeping the UI performant while orchestrating calls across multiple back-end services, fault tolerance, and health/performance monitoring are all (at least in my experience) bigger challenges.
For the record, we use a hosted instance of RabbitMQ from CloudAMQP.com and it works great. Performance is good, they have lots of scalable packages to choose from, and we've had zero issues with reliability or downtime. The latest RabbitMQ 3.8 release now includes OAuth as well, so we are currently working to integrate our Authz flows with our message broker and will have a nice end-to-end security solution.