After the heated debate on the first answer, let me add some perspective:
One use case that often comes up is how to handle, for example, authentication information after a request hits the first service, which in turn calls other services. The question usually is: do I hand over the authentication information (usernames, groups, etc.), or do I just hand over the token the client sent and let the next service query the authentication information again?
As far as I can tell, the microservice community has not yet agreed upon an "idiomatic" way of solving this problem. I think there is a good reason for that, and it lies in the different requirements that various applications pose on this subject. Sometimes authentication is only necessary at the first service an external request hits - in that case, don't bother putting too much work into it. Still, most systems I know have higher demands and thus require another level of sophistication on the subject of authentication.
Let me give you my view of how this problem could be solved: The easiest way is to hand the access token the client sent around between the back-end services. Yes - this approach requires every service to re-inquire the user information every time it gets hit with a request. If (and I hope this does not happen to this extent in your system) there are 25 cross-service calls per request, this most likely means 25 hits on some kind of authentication service. Most people will now start screaming in terror at this horrible duplication - but let's think the other way around: if the same system were a well-structured monolith, you'd still make these calls (probably hitting a DB every single time) at different places in your process. The big deal about these calls in a microservice architecture is the network overhead, and it's true - it will kill you if done wrong! I will give you the solution we took, which worked well under heavy loads for us:
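To make that concrete, here is a minimal Go sketch of the "hand the token around" idea. The service name `orders-service` and the routes are made up for illustration - the only point is that the Authorization header travels with every cross-service call:

```go
package main

import (
	"io"
	"net/http"
)

// callDownstream copies the incoming Authorization header onto the
// outgoing cross-service request; the downstream service then
// re-validates the token against the token service itself.
func callDownstream(w http.ResponseWriter, r *http.Request) {
	// Hypothetical downstream endpoint - substitute your own service.
	req, err := http.NewRequest(http.MethodGet, "http://orders-service/internal/orders", nil)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Hand over the token exactly as the client sent it; no user or
	// group details are serialized between services.
	req.Header.Set("Authorization", r.Header.Get("Authorization"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/api/orders", callDownstream)
	http.ListenAndServe(":8080", nil)
}
```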
We developed a token service (which we'll be open-sourcing quite soon). This service does nothing else except store a combination of a token, its expiration date and some schema-less JSON content. It has a very simple REST interface that lets you create, invalidate, extend and read tokens and their content. This service has multiple back-ends that can be configured according to the environment it runs in. For development purposes it has a simple in-memory storage that is not synchronized, persisted or replicated in any way. For production environments we wrote a back-end that synchronizes these tokens between multiple instances (including all the stuff like quorums, asynchronous persistence etc.). This back-end enables us to scale the service very well, which is a premise for the solution I'm proposing: if every service node has to get the information associated with a token every time it receives a request, the service that provides it has to be really fast! Our implementation returns tokens and their information in far less than 5 milliseconds, and we're confident we can push this metric down even further.
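Since the code isn't public yet, here's only a rough Go sketch of what the development (in-memory) back-end looks like in spirit - the routes and field names are a simplification for illustration, not our actual API:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"
	"sync"
	"time"
)

// token pairs an expiration date with schema-less JSON content,
// keyed by the opaque token string in the URL.
type token struct {
	Expires time.Time       `json:"expires"`
	Content json.RawMessage `json:"content"`
}

// memoryStore is the development back-end: in-memory only, not
// synchronized across instances, persisted or replicated.
type memoryStore struct {
	mu     sync.RWMutex
	tokens map[string]token
}

func (s *memoryStore) handle(w http.ResponseWriter, r *http.Request) {
	key := strings.TrimPrefix(r.URL.Path, "/tokens/")
	switch r.Method {
	case http.MethodPut: // create a token, or extend it by re-putting
		var t token
		if err := json.NewDecoder(r.Body).Decode(&t); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		s.mu.Lock()
		s.tokens[key] = t
		s.mu.Unlock()
		w.WriteHeader(http.StatusNoContent)
	case http.MethodGet: // read a token and its content
		s.mu.RLock()
		t, ok := s.tokens[key]
		s.mu.RUnlock()
		if !ok || time.Now().After(t.Expires) {
			http.NotFound(w, r) // an expired token behaves like a missing one
			return
		}
		json.NewEncoder(w).Encode(t)
	case http.MethodDelete: // invalidate a token
		s.mu.Lock()
		delete(s.tokens, key)
		s.mu.Unlock()
		w.WriteHeader(http.StatusNoContent)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

func main() {
	store := &memoryStore{tokens: map[string]token{}}
	http.HandleFunc("/tokens/", store.handle)
	http.ListenAndServe(":8080", nil)
}
```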
The other strategy we have is to orchestrate services that make heavier queries to the token service (reading the content is expensive compared to just checking a token's validity/existence) so that they're located on the same physical nodes, or close by, to keep network latency to a minimum.
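In client code the cost difference looks roughly like this - a sketch that additionally assumes the token service answers HEAD requests for pure existence checks (the sketch above would need a MethodHead case for that):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// isValid only checks that a token exists and is not expired - cheap
// on both ends, since no content is serialized or parsed.
func isValid(tok string) (bool, error) {
	resp, err := http.Head("http://token-service/tokens/" + tok)
	if err != nil {
		return false, err
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

// readContent fetches and decodes the full token content - the
// expensive call you want to keep physically close to the token
// service.
func readContent(tok string) (json.RawMessage, error) {
	resp, err := http.Get("http://token-service/tokens/" + tok)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("token invalid or expired: %s", resp.Status)
	}
	var t struct {
		Content json.RawMessage `json:"content"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&t); err != nil {
		return nil, err
	}
	return t.Content, nil
}

func main() {
	ok, _ := isValid("some-opaque-token") // hypothetical token value
	fmt.Println("valid:", ok)
}
```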
The more general message is: do not be afraid of cross-service calls as long as the number of these calls stays decoupled from the amount of content that is handled (bad example here). Services that are called more frequently need to be engineered much more carefully, and their performance needs to be optimized to shave off the last possible millisecond. DB hits in these kinds of system-critical services, for example, are an absolute no-go - but there are design patterns and architectures that can help you avoid them!
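One such pattern, as a sketch: a tiny in-process read-through cache with a TTL in front of the slow lookup, so repeated hot reads never reach the DB. This is an illustration of the idea, not code from our token service:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cached pairs a value with the time it was loaded.
type cached struct {
	value    string
	loadedAt time.Time
}

// readThroughCache serves hot lookups from memory and only falls
// back to the slow source (e.g. a DB) when an entry is missing or
// older than the TTL - keeping DB hits off the critical path.
type readThroughCache struct {
	mu   sync.Mutex
	ttl  time.Duration
	data map[string]cached
	load func(key string) (string, error) // the slow path
}

func (c *readThroughCache) Get(key string) (string, error) {
	c.mu.Lock()
	entry, ok := c.data[key]
	c.mu.Unlock()
	if ok && time.Since(entry.loadedAt) < c.ttl {
		return entry.value, nil // served without touching the DB
	}
	value, err := c.load(key)
	if err != nil {
		return "", err
	}
	c.mu.Lock()
	c.data[key] = cached{value: value, loadedAt: time.Now()}
	c.mu.Unlock()
	return value, nil
}

func main() {
	cache := &readThroughCache{
		ttl:  30 * time.Second,
		data: map[string]cached{},
		load: func(key string) (string, error) {
			return "value-for-" + key, nil // stand-in for the expensive DB lookup
		},
	}
	v, _ := cache.Get("user:42")
	fmt.Println(v) // a second Get within 30s never reaches the loader
}
```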
You may have noticed already that I did not directly answer the question you put up for debate. Why? I'm vehemently against having shared databases between services. Even if those databases are schema-less, you couple two services together without that dependency being visible. Once you decide to restructure the data in your token service and there is another service even just reading from that database, you have just screwed up two services - and you might only realize it when it's too late, because the dependency is not transparent. State/data in services should only be accessed through well-defined interfaces so the services can be properly abstracted, developed and tested independently. In my opinion, changing the persistence technology or structure in one service should never break, or even require changes in, another service. Exclusively accessing a service through its API gives you the possibility to refactor, rebuild or even completely rewrite services without necessarily breaking other services that rely on it. It's called decoupling!
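To show what "well-defined interface" means in practice, here is a sketch mirroring the REST verbs from above (again my illustration, not our real code): everything behind this interface, including the storage technology, can change freely without any consumer noticing.

```go
package tokens

import "time"

// TokenStore is the well-defined interface everything else programs
// against. Whether tokens live in memory, in a replicated store or
// in a relational DB stays invisible behind it - which is exactly
// what lets you swap the persistence without touching consumers.
type TokenStore interface {
	Create(key string, content []byte, ttl time.Duration) error
	Read(key string) ([]byte, error)
	Extend(key string, ttl time.Duration) error
	Invalidate(key string) error
}
```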
Let me know whether this is helpful or not!