
Why is it bad practice to share libraries between microservices? Let's say that I want to share a domain model between two microservices (they have the same bounded context; the original microservice was simply split into two smaller ones due to its size). What is wrong with this approach? Changing the domain model won't break anything, since each consumer of the library uses a specific version of it, will it?

MasterLu32
    Why do you think it's a bad practice? – azurefrog Dec 05 '19 at 20:25
    It's not *intrinsically* wrong (delta it's unclear what you mean by "sharing libraries"). I'd say it's *very* common to "share" libraries, here by "share" I mean "these microservices have common dependencies". If they're actual *shared* libraries, e.g., a shared artifact where each service doesn't have its own copy, that breaks the microservice model in that they're no longer completely self-contained. – Dave Newton Dec 05 '19 at 20:38
  • @azurefrog well, there're quite a few threads here on SO that take this as the accepted answer, like [source1](https://softwareengineering.stackexchange.com/questions/290922/shared-domain-model-between-different-microservices) or [source2](https://stackoverflow.com/questions/50400384/in-the-microservices-architecture-why-they-say-is-bad-to-share-rest-client-libr) – MasterLu32 Dec 05 '19 at 21:06

1 Answer


There is no problem if two microservices share a 3rd party library, and in fact this happens all the time. Many use the same service framework, logging framework, common libs from apache, google, etc.

The problem happens when microservice teams share a 3rd party library that they can modify to suit their own purposes, because if they can, then they eventually will. Requirements from many services will end up getting pushed down into the library, and its purpose will become confused and difficult to state. Its code will bloat.

Service teams will then regularly have to modify the library in the course of their everyday business. Because the library serves many masters, they will have to do it very carefully... talk to stakeholders... make sure they don't break anyone else's stuff... Sheesh! The library becomes a little monolith.

If you share that one library, you'll share others too. Eventually they'll all be little monoliths and your whole architecture will be a monolith made even more annoying by splitting it into several repositories.

-- (added in response to the comments):

Now, you suggest that this problem doesn't happen as long as each microservice depends on a specific version of the library. That doesn't solve the problem, though. It just moves the work around.
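Concretely, "depending on a specific version" just means each service's build file names an exact version of the shared library. A minimal Gradle sketch (the `com.example:domain-model` coordinates are made up for illustration):

```groovy
// Hypothetical Gradle build fragment: this service pins an exact version
// of the shared domain library, so changes made by other teams don't
// flow in until this service explicitly chooses to upgrade.
dependencies {
    implementation 'com.example:domain-model:1.4.2' // pinned, never 'latest' or a range
}
```

The pin does isolate the service day-to-day, but as described below, all the deferred integration work comes due at upgrade time.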

Let's say that you depend on a specific version of the library, and 6 teams have made their own modifications to it since that version. Following your advice, none of the teams bothered to talk to your team about it, so now it's a mess and you have a choice:

  1. Spend all the time required to fix any problems that their changes might have caused in your service (and don't bother to talk to them about it, so they'll have to do the same thing on their next upgrade), and then upgrade; or
  2. Just fork the library to get rid of all their changes and fix just your own problems.

Choice 2 is the right choice, but almost nobody does this! Because all services share the same library, people think that it's some kind of business rule that they share the same library.

Since people are very reluctant to fork the library after the pattern of sharing it is established, it's better to fork it at the start, i.e., just don't share it around in the first place.

Matt Timmermans
  • I don't really see your point. As long as a microservice uses a certain version of a library, other microservices won't affect it. Changes simply have to be made in a new version of the library, and no other microservice is forced to change its library version. The only drawback I see is that the library will be developed on many branches, but is this really a problem? – MasterLu32 Dec 06 '19 at 08:34
  • Same for the library's API DTOs, but here it's even less of a problem, as they are usually only dumb POJOs, and the library's consumers can either simply ignore the non-needed fields or again not change the library version. Sure, the DTOs get a bit bloated, but does this justify all the boilerplate code? – MasterLu32 Dec 06 '19 at 08:40
    I added some text about version pinning. API DTOs are generally OK to share out of service, since nobody can really change them without changing the API. They are language-specific, though, and they have a lot of boilerplate, so it's usually better to generate the DTOs from an OpenAPI spec (or vice-versa), and share the OpenAPI spec instead. – Matt Timmermans Dec 06 '19 at 13:56
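The generate-from-a-spec approach in the comment above can be sketched with a minimal OpenAPI fragment (the `Order` schema is invented for illustration). Each service runs its own code generator against the shared spec, rather than depending on a shared compiled DTO library:

```yaml
# Hypothetical shared OpenAPI fragment. Each service generates its own
# language-specific DTOs from this spec (e.g. with openapi-generator),
# so the spec is the shared artifact, not a compiled jar.
components:
  schemas:
    Order:
      type: object
      properties:
        id:
          type: string
        total:
          type: number
          format: double
```

Since nobody can change these shapes without changing the API itself, sharing the spec avoids the modify-to-suit-yourself problem the answer describes.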