Let's say you are using either ServiceFabric or Kubernetes, and you are hosting a transaction data warehouse microservice (maybe a bad example, but suppose all it does is implement a simple CQRS architecture: records consisting of a sender Id, a receiver Id, a date, and the payment amount, with writes and reads against the DB).
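To make it concrete, here is roughly the shape I have in mind, as a minimal plain-Java sketch (all names are made up, not from any real codebase): the transaction record plus the command/query split of CQRS.

```java
// Minimal sketch of the transaction model and the CQRS split:
// one interface handles writes (commands), a separate one handles reads (queries).
import java.math.BigDecimal;
import java.time.Instant;
import java.util.List;
import java.util.UUID;

record Transaction(UUID senderId, UUID receiverId, Instant date, BigDecimal amount) {}

interface TransactionWriteStore {            // command side: persist incoming transactions
    void save(Transaction tx);
}

interface TransactionReadStore {             // query side: read models / lookups
    List<Transaction> findBySender(UUID senderId);
    List<Transaction> findByReceiver(UUID receiverId);
}
```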
For the sake of argument, say this microservice needs to be replicated across different geographic locations to ensure the data remains recoverable if one database goes down.
Now, the naive approach I'm thinking of is to fire an event when a transaction is received, and have the orchestrator microservice expect an event-processed acknowledgment within a specific timeframe (see the sketch below). But the question remains: what about the database? What happens when we scale out the microservice and new instances are spun up? They will all write to the same database, won't they?
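Here is a rough sketch of that orchestrator idea, using only the JDK; `EventBus`, `Ack`, and the topic names are placeholders I made up to stand in for whatever broker client would actually be used:

```java
// Sketch: publish a "transaction received" event, then wait a bounded time
// for a "processed" acknowledgment before treating the write as failed.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

class TransactionOrchestrator {
    private static final long ACK_TIMEOUT_SECONDS = 5;  // arbitrary timeframe for the ack
    private final EventBus bus;                          // hypothetical messaging abstraction

    TransactionOrchestrator(EventBus bus) { this.bus = bus; }

    void handleIncoming(Transaction tx) {
        // Fire the event and register interest in the matching acknowledgment.
        CompletableFuture<Ack> ack = bus.publishAndAwaitAck("transaction-received", tx);

        ack.orTimeout(ACK_TIMEOUT_SECONDS, TimeUnit.SECONDS)
           .whenComplete((result, error) -> {
               if (error != null) {
                   // No ack in time (or publish failed): retry, dead-letter, or alert.
                   bus.publish("transaction-retry", tx);
               }
           });
    }
}

interface EventBus {                                      // stands in for the real broker client
    CompletableFuture<Ack> publishAndAwaitAck(String topic, Transaction tx);
    void publish(String topic, Transaction tx);
}

record Ack(String transactionEventId) {}
```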
One possible solution would be to put the database inside the Docker container, so that each replica owns its own copy. Is that a good solution?
Please share your thoughts and best practices.