3

In a microservices architecture, what is the best strategy for keeping many developer environments up to date across multiple source code repositories?

Suppose there were 10 teams of 10 developers working on 200 microservices in git. Every developer would need to pull regularly from every repository. This could be done with scripts, but is there a better way? Are we doing this wrong? It seems like a heavy overhead.
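For reference, our current approach is roughly a script along these lines (the repo list and base directory are just placeholders for illustration):

```python
#!/usr/bin/env python3
"""Pull the latest changes for every microservice repo a developer has cloned."""
import subprocess
from pathlib import Path

BASE_DIR = Path.home() / "src"          # where the repos are cloned (placeholder)
REPOS_FILE = BASE_DIR / "repos.txt"     # one repo directory name per line (placeholder)

def pull_all():
    for name in REPOS_FILE.read_text().splitlines():
        repo = BASE_DIR / name.strip()
        if not repo.is_dir():
            continue
        print(f"Updating {repo} ...")
        subprocess.run(["git", "-C", str(repo), "pull", "--ff-only"], check=False)

if __name__ == "__main__":
    pull_all()
```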

LL020
  • 55
  • 4
  • Is every developer working on every microservice? Wouldn't a developer only be concerned with the microservice(s) they are developing and (maybe) their immediate dependencies? – Pace Nov 19 '15 at 21:26
  • No, they only work on a few at a time but need to keep everything else up to date. Any team could potentially change the code in any repo if they need to. Usually teams would be working on a subset. – LL020 Nov 19 '15 at 22:48
  • What's the clear separation of teams and ownership here? Do all 10 teams have ownership of all 200 microservices? In that scenario, it sounds more like an organizational problem. Without ownership, no developer will understand the entire system of services and how they contribute to the larger product. – Will C Nov 22 '15 at 06:15

2 Answers

3

I wouldn't advise having every developer build every microservice. I would propose some sort of continuous integration environment: one centralized build server connected to all of the git repos.

Each time a repo is updated, the build server should detect the change, build the code, run unit (and/or functional) tests, and then push the service to some sort of integration environment. The build server may then also run some integration testing against the deployed service.
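As a rough sketch of what such a per-repo pipeline might look like (the build, test, and deploy commands here are assumptions; your build server and stack would supply the real trigger and deployment step):

```python
import subprocess

def run(cmd, cwd):
    """Run one pipeline step and fail fast if it returns non-zero."""
    subprocess.run(cmd, cwd=cwd, check=True)

def pipeline(repo_dir, service_name):
    # 1. Build the service (hypothetical command; use whatever your stack uses).
    run(["make", "build"], repo_dir)
    # 2. Run unit/functional tests with other services mocked.
    run(["make", "test"], repo_dir)
    # 3. Push the service to the shared integration environment (hypothetical deploy script).
    run(["./deploy.sh", "--env", "integration", service_name], repo_dir)
    # 4. Optionally kick off integration tests against the deployed service.
    run(["make", "integration-test"], repo_dir)
```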

Most developers should be able to do all their development and testing without needing access to the other microservices. If a developer is building service X, which depends on Y & Z and is depended on by A & B, then the developer should, for the most part, only need service X. For unit testing, services Y & Z should be mocked/simulated.
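For example, if X's business logic takes clients for Y and Z as dependencies, a unit test can replace them with mocks so no other microservice has to be running; everything below (names, return values) is purely illustrative:

```python
import unittest
from unittest.mock import MagicMock

def process_order(item_id, y_client, z_client):
    """Business logic in service X that depends on Y (prices) and Z (stock)."""
    return {
        "item_id": item_id,
        "price": y_client.get_price(item_id),
        "stock": z_client.get_stock(item_id),
    }

class ProcessOrderTest(unittest.TestCase):
    def test_combines_price_and_stock(self):
        # Y and Z are replaced with mocks, so no other microservice needs to run.
        y = MagicMock()
        y.get_price.return_value = 10.0
        z = MagicMock()
        z.get_stock.return_value = 3
        self.assertEqual(
            process_order("abc", y, z),
            {"item_id": "abc", "price": 10.0, "stock": 3},
        )

if __name__ == "__main__":
    unittest.main()
```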

The challenge is going to be preventing the developer from breaking services A & B by making a change to service X. That sort of integration testing tends to be trickier, as developers working on service X often don't know the details of (or even how to use) the services that depend on it (e.g. A & B).

The way to tackle that, I believe, is to have regular integration testing, either triggered by the build of service X or run on a schedule. With a project this complicated, a strong unit test philosophy and a robust integration test framework are going to be essential.
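An integration suite along those lines could, for instance, hit the deployed services in the shared environment and exercise X's consumers to catch contract breakage; the URLs and endpoints below are purely illustrative (pytest-style tests):

```python
import requests

# Base URLs in the shared integration environment (illustrative; in practice
# these would come from service discovery or configuration).
SERVICE_X = "https://integration.example.com/service-x"
SERVICE_A = "https://integration.example.com/service-a"

def test_service_x_is_healthy():
    assert requests.get(f"{SERVICE_X}/health", timeout=5).status_code == 200

def test_service_a_still_works_against_new_x():
    # Service A calls X internally; exercising A's endpoint verifies the
    # contract between A and the newly deployed X.
    resp = requests.get(f"{SERVICE_A}/orders/abc", timeout=5)
    assert resp.status_code == 200
    assert "item_id" in resp.json()
```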

Pace
  • 41,875
  • 13
  • 113
  • 156
3

This comes down to how good your service location/discovery technology is. All any one service needs to know is where to send request X in order to get response Y from a particular service. If this is well implemented, you could very well have each developer run whatever subset of services he is directly working on locally and have the rest of the services run in a common environment (let's call it Remote).

Remote will be configured to run all services in your platform and will have some way to get the latest code (on a release cadence or at regular intervals). Local environments could be configured so that all the services running locally know what else is running locally and know how to reach the Remote services when they need any information. For developer environments, add conventions where each developer runs the minimum number of services required for productive development. This means that if you are not working on the code of a service, you run it remotely, so you know you are running the latest code and are not out of date.
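A minimal sketch of that convention, assuming a simple configuration that maps service names to endpoints, with a local override for the services the developer is actively running (all names and URLs are illustrative):

```python
# Endpoints of the shared Remote environment (illustrative).
REMOTE_ENDPOINTS = {
    "orders":  "https://remote.example.com/orders",
    "billing": "https://remote.example.com/billing",
    "catalog": "https://remote.example.com/catalog",
}

# Services the developer is actively working on run locally; everything else
# resolves to Remote, so it is always the latest pushed code.
LOCAL_OVERRIDES = {
    "orders": "http://localhost:8081",
}

def resolve(service_name):
    """Return the base URL a locally running service should use for service_name."""
    return LOCAL_OVERRIDES.get(service_name, REMOTE_ENDPOINTS[service_name])

# e.g. resolve("orders")  -> "http://localhost:8081"
#      resolve("billing") -> "https://remote.example.com/billing"
```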

A couple of gotchas with this approach:

  1. You could run service A locally and have service B running on Remote. You test everything out locally and it seems to be working, so you decide to push. While you are pushing, another developer also pushes changes to B, and now your changes are no longer compatible.
  2. If you are running services A and C locally and B remotely, it should be pretty straightforward to route requests for service B to the remote environment. You should, however, be careful that if A calls B and B calls C, then the call to C needs to be routed from the remote environment back to your local C service and not to Remote's C service (see the sketch after this list).
  3. Testing - You can get over a lot of issues related to testing in a complex environment like this by having two separate suites of tests: 1. unit tests - tests that exercise individual components in your service with all calls to other services mocked; 2. environment integration tests - tests that validate the communication between different services. Suite 1 will check the internal code of your service, and Suite 2 will run after you push your changes to Remote and will continuously ensure that the inter-service communication is as expected.
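For gotcha 2, one possible way to handle the routing (an assumption on my part, not something prescribed above) is to pass an override map along with each request, so that Remote's B knows to send its call to C back to the developer's machine:

```python
import json

# Illustrative Remote endpoints; in practice this would come from service discovery.
REMOTE_ENDPOINTS = {"B": "https://remote.example.com/b", "C": "https://remote.example.com/c"}

def resolve_endpoint(service_name, route_overrides):
    """Prefer the developer's local instance when an override is present."""
    return route_overrides.get(service_name, REMOTE_ENDPOINTS[service_name])

def handle_request_in_b(headers):
    # Service B runs in Remote. It reads the override map that the locally
    # running service A attached to the original request, so B's call to C is
    # routed back to the developer's local C rather than to Remote's C.
    overrides = json.loads(headers.get("X-Route-Overrides", "{}"))
    return resolve_endpoint("C", overrides)

# Example: A (running locally) sends X-Route-Overrides: {"C": "http://devbox.local:8083"}
print(handle_request_in_b({"X-Route-Overrides": '{"C": "http://devbox.local:8083"}'}))
# -> http://devbox.local:8083
```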

Hope that helps

KnightFox
  • 3,132
  • 4
  • 20
  • 35