
Could somebody please help me create a YAML config file for Kubernetes to handle a situation like this: one pod with 3 containers (for example), where these containers have to be deployed on 3 nodes of a cluster (Google GCE).

|P|      |Cont1| ----> |Node1|
|O| ---> |Cont2| ----> |Node2| <----> GCE cluster
|D|      |Cont3| ----> |Node3|

Thanks

suikoy
  • https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ As they say in the documentation here: The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. This means a pod's containers are always on the same node. If you want to scale horizontally, you could put the containers into separate pods. – Dries De Rydt Nov 10 '17 at 16:43
  • It seems strange that I cannot deploy containers across multiple cluster nodes, because this way I cannot easily scale a battery of web servers by exploiting the elasticity of the cluster and Kubernetes' ability to perform operations in one shot... – suikoy Nov 10 '17 at 17:01
  • Well, in general the components do not need to scale in equal measure. Your database can handle a certain load, but maybe your web page itself needs to scale more to handle the same number of users. In this case it is very beneficial to have them able to scale separately. If you keep them in the same pod, they always have to scale horizontally by exactly the same degree. Why do you want to bundle them together? – Dries De Rydt Nov 10 '17 at 19:10
  • In fact the idea was to have one pod for the db and a different pod for the web server. But I was planning to develop it this way because I was convinced I could use the cluster to manage all the web servers as a single service. On the other hand, what is the point of having a node cluster if a pod with multiple containers can only be deployed on a single node? Considering that the cluster nodes have the same hardware, putting multiple containers on the same node means loading one node while leaving the others idle. Now I really do not understand the usefulness of having cluster+kubernetes... – suikoy Nov 10 '17 at 20:24
  • Well, let's say your node has 2 pods running your web service. Your cluster will balance the load between both servers and now you can handle more traffic. Smaller nodes are also often cheaper than having one big monolith. Additionally, if you are running two web servers and one crashes, you can still serve traffic from the backup. Kubernetes will also restart services that go down. You should research terms like availability and horizontal scalability. It also lets you use hardware more efficiently. If you have two small services, they can fit on one node and you save on costs. – Dries De Rydt Nov 10 '17 at 22:17
  • OK, but these things could also be done with cloud web services (AWS or GCE) using only VMs... I understand that I can also develop a multi-tier architecture with a cluster & Kubernetes, but I don't see a great advantage in this solution. Furthermore, having an extra level of abstraction (node -> pod -> code) can make the configuration more complex than the virtual machine approach (node -> code) – suikoy Nov 11 '17 at 12:03
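
As the comments suggest, the usual approach is to put the web server and the database in separate pods, each managed by its own Deployment, so that the tiers scale independently. Below is a minimal sketch under that assumption; all names, images, and replica counts (my-web, my-db, nginx, mysql, and so on) are illustrative and not taken from the question.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web                     # hypothetical web tier
spec:
  replicas: 3                      # three web pods; the scheduler spreads them across nodes
  selector:
    matchLabels:
      app: my-web
  template:
    metadata:
      labels:
        app: my-web
    spec:
      containers:
      - name: web
        image: nginx:1.13          # illustrative image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-db                      # hypothetical database tier, scaled independently
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: db
        image: mysql:5.7              # illustrative image
        env:
        - name: MYSQL_ROOT_PASSWORD  # required by the mysql image to start
          value: example

Scaling the web tier (for example kubectl scale deployment my-web --replicas=5) then has no effect on the database pod.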

2 Answers


From Kubernetes Concepts:

Pods in a Kubernetes cluster can be used in two main ways:

  • Pods that run a single container. The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.

  • Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service – one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.

In short, most likely, you should place each container in its own Pod to truly benefit from the microservices architecture, as opposed to the monolithic architecture commonly deployed in VMs. However, there are some cases where you may want to consider co-locating containers. Namely, as described in this article (Patterns for Composite Containers), some of the composite-container patterns are:

  • Sidecar containers: extend and enhance the "main" container (see the sketch below)

  • Ambassador containers: proxy a local connection to the world

  • Adapter containers: standardize and normalize output
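
To make the sidecar pattern concrete, here is a minimal sketch of a single Pod whose two containers share an emptyDir volume: one serves the files while the other periodically refreshes them. The names and images (web-with-sidecar, nginx, busybox, shared-data) are illustrative, not taken from the answer.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar           # hypothetical name
spec:
  volumes:
  - name: shared-data              # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web                      # "main" container: serves the shared files
    image: nginx:1.13
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-refresher        # sidecar: rewrites the files every minute
    image: busybox:1.28
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date > /pod-data/index.html; sleep 60; done
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data

Note that both containers always land on the same node, which is why this pattern suits tightly coupled helpers rather than independently scaled tiers.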

Once you define and run the Deployments, the Scheduler will be responsible for selecting the most suitable placement for your Pods, unless you manually assign Pods to Nodes by defining a nodeSelector with node labels in the Deployment's YAML (not recommended unless you know what you're doing).

Khaled

You can assign multiple containers to a single pod. You can assign pods to a specific node-pool. But I am not sure whether it is possible to spread the containers of a single pod across multiple nodes.

What you can do here is assign each container to a different pod (3 containers --> 3 pods) and then assign each pod to a different node-pool by adding this code to your deployment's .yaml file.

# goes under spec.template.spec in the Deployment's YAML:
nodeSelector:
  nodeclass: pool1
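
For context, here is a minimal sketch of where that selector sits in a full Deployment, assuming the target nodes have already been labelled (for example with kubectl label nodes <node-name> nodeclass=pool1); the name and image (web-pool1, nginx) are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-pool1                  # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-pool1
  template:
    metadata:
      labels:
        app: web-pool1
    spec:
      nodeSelector:
        nodeclass: pool1           # pods schedule only onto nodes carrying this label
      containers:
      - name: web
        image: nginx:1.13          # illustrative image

Repeat with nodeclass: pool2 and nodeclass: pool3 (and matching node labels) to pin the other two pods to their own node pools; on GKE the built-in per-pool label cloud.google.com/gke-nodepool can serve the same purpose.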