I have two applications, each of which uses several databases. Before the days of Docker, I would have just put all the databases on one host, because of the resource consumption associated with running multiple physical hosts or VMs.

Logically, it seems to me that separating these into groups (one group of DBs per application) is the right thing to do, and with containers the overhead of doing so is low. However, I have not seen this use case in practice. What I have seen is multiple containerized Postgres instances run in order to maintain multiple versions side by side (hence different images).

Is there a good technical reason why people do not do this, i.e., run two or more PostgreSQL containers from the same image in order to isolate groups of databases?

When I tried it, I ran into errors caused by the second instance trying to configure the postgres user, and I had to pass in an option to ignore migration errors. I'm wondering if there is a good reason not to do this.
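For reference, this is roughly the shape of what I was attempting; the container names, volume names, ports, and image tag below are placeholders rather than my actual setup:

    # One container per application: same image, separate data volumes and host ports
    docker run -d --name app1-db \
      -e POSTGRES_PASSWORD=changeme1 \
      -v app1-pgdata:/var/lib/postgresql/data \
      -p 5432:5432 \
      postgres:15

    docker run -d --name app2-db \
      -e POSTGRES_PASSWORD=changeme2 \
      -v app2-pgdata:/var/lib/postgresql/data \
      -p 5433:5432 \
      postgres:15

I suspect the errors I hit came from the two instances sharing something (a data directory or a host port) instead of each getting their own, but I haven't confirmed that.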

user3814483

1 Answer

Well, I am not used to working with PostgreSQL but rather with MySQL, SQLite and MS SQL - and Docker, of course.

When I got into Docker, I read a lot about microservices, how to develop them and, of course, the DevOps ideas behind Docker and microservices.

In this world I would absolutely prefer to have two containers of the same base image, with a multi-stage build and/or different env-files, to run your infrastructure. Docker not only allows this, it encourages it.
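A minimal sketch of what I mean, assuming the official postgres image; the env-file names, credentials, ports, and version tag are made up, so adjust them to your infrastructure:

    # app1.env (example contents)
    #   POSTGRES_USER=app1
    #   POSTGRES_PASSWORD=changeme
    #   POSTGRES_DB=app1_db
    # app2.env: same keys with app2's values

    # Same image for both containers; only env-file, volume, and host port differ
    docker run -d --name app1-db --env-file app1.env \
      -v app1-data:/var/lib/postgresql/data -p 5432:5432 postgres:15

    docker run -d --name app2-db --env-file app2.env \
      -v app2-data:/var/lib/postgresql/data -p 5433:5432 postgres:15

The important part is that each container gets its own data volume and host port; the image itself can stay identical.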

PassionateDeveloper