Currently I have a Java web app that is backed by half-a-dozen-or-so microservices, where each microservice communicates with 1+ backing resources (DBs, 3rd party REST services, CRMs, legacy systems, JMS, etc.). Each one of these components lives on 1+ VMs. Hence the architecture is as follows:

  • myapp.war lives on both myapp01.example.com and myapp02.example.com
    • Connects to dataservice.war living on dataservice01.example.com and dataservice02.example.com, which connects to mysql01.example.com
    • myapp.war also connects to crmservice.war living on crmservice01.example.com, which connects to http://some-3rd-part-crm.example.com

Now say I wanted to "Dockerify" my whole app architecture. Would I write 1 Docker image for each type of component (myapp, dataservice, mysql, crmservice, etc.) or would I write one "monolithic" container containing all apps, services, DBs, message brokers (JMS), etc.?

I'm sure I could do it either way, but the root of my question is this: Are Docker containers intended to house/contain a single app, or are they intended to represent an entire environment, comprised of multiple interconnected apps/services?

smeeb

1 Answer

Docker philosophy definitely dictates one container per concern: create a separate Dockerfile for each application, service, and backing resource you have, and then link the containers together.
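As a minimal sketch, a Dockerfile for one of your WARs could look like this (the Tomcat base image and the WAR location are assumptions about your deployment, not taken from your question):

```dockerfile
# Hypothetical Dockerfile for myapp.war, assuming it deploys to Tomcat
FROM tomcat:8-jre8
# Deploy the WAR as the ROOT web application
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Each of the other services (dataservice, crmservice) would get its own similar Dockerfile, and mysql would typically use the official image rather than a custom one.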

You can use Docker Compose to run the different containers together; see the official Django and Rails examples.
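For the architecture in your question, a docker-compose.yml might be sketched roughly like this (the build paths, ports, and environment variable names are assumptions for illustration):

```yaml
# Hypothetical docker-compose.yml mirroring the architecture in the question
version: "2"
services:
  myapp:
    build: ./myapp            # image built from myapp's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - dataservice
      - crmservice
  dataservice:
    build: ./dataservice
    depends_on:
      - mysql
    environment:
      - DB_HOST=mysql         # containers reach each other by service name
  crmservice:
    build: ./crmservice
    environment:
      - CRM_URL=http://some-3rd-part-crm.example.com
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=example
```

The myapp01/myapp02 pairing then becomes a matter of running more than one container for that service behind a load balancer, rather than hand-managing two VMs.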

Also, tools like Kubernetes or ECS let you manage the full lifecycle and infrastructure of your entire environment, including auto-scaling, load balancing, etc.
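To illustrate, a minimal Kubernetes Deployment for one service could look like the following sketch (the image name and port are assumptions):

```yaml
# Hypothetical Kubernetes Deployment running two replicas of dataservice,
# mirroring the dataservice01/dataservice02 pair of VMs
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dataservice
  template:
    metadata:
      labels:
        app: dataservice
    spec:
      containers:
        - name: dataservice
          image: myregistry/dataservice:latest   # assumed image name
          ports:
            - containerPort: 8080
```

Kubernetes would then keep two instances running, restart them on failure, and let you scale by changing `replicas`.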

sap1ens