
I have a scenario where I need to run two instances of an app container within the same pod. I have them set up to listen on different ports. Below is what the Deployment manifest looks like. The Pod launches just fine with the expected number of containers, and I can even connect to both ports on the podIP from other pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: app1-service
  name: app1-dep
  namespace: exp
spec:
  selector:
    matchLabels:
      service: app1-service
  template:
    metadata:
      labels:
        service: app1-service
    spec:
      containers:
        - image: app1:1.20
          name: app1
          ports:
          - containerPort: 9000
            protocol: TCP
        - image: app1:1.20
          name: app1-s1
          ports:
          - containerPort: 9001
            protocol: TCP

I can even create two different Services, one for each port of the container, and that works great as well. I can individually reach both Services and end up on the respective container within the Pod.

apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  name: app1-s1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP

I want both instances of the container behind a single Service that round-robins between the two containers. How can I achieve that? Is it possible within the realm of Services, or would I need to explore Ingress for something like this?

  • Why do you have it set up this way, instead of having a Deployment manage multiple Pod replicas with one copy of the container each? (...in which case an ordinary Service would round-robin across the replicas, with no special configuration.) – David Maze Jul 21 '19 at 10:42
  • Multiple pod replicas is how we currently have it. However, there is an app that consumes about 50G or so RAM for data mmapped into memory but only a tenth of the system CPU. I am trying to get more container instances in a pod vs pod replicas to better utilize cluster resources. This way two instances of the app use the same shared memory, as they both eventually want some part of that 50G accessible. All of this without touching the app to actually make use of more CPU resources and throughput. – Naga Jul 21 '19 at 19:26
  • But why not just have them be separate Deployments or ReplicaSets with different replica counts? This way, it opens up the possibility to have different node annotations for different sized nodes and then you can make affinity rules for the different Deployments to pin the higher memory usage Pods on higher memory nodes and vice-versa. – Andy Shinn Jul 21 '19 at 21:09
  • We do use separate Deployments, each with multiple replicas. The problem I am trying to solve is reducing the memory footprint. If I have 5 replicas, that would be 5 x 50GB pods, for example, which may or may not be on a single node. So if I had two instances of the app container that can use the same 50GB of shared memory, I could potentially reduce the replica count and reduce resource usage on the cluster. One option I will be looking into, though, is trying to pin these app pods onto a single node via affinity rules and look at inter-pod IPC sharing. – Naga Jul 21 '19 at 22:10

3 Answers


Kubernetes Services rely on kube-proxy, which has three proxy modes: userspace, iptables (the default), and IPVS.

  • userspace: the oldest mode; it distributes traffic round-robin, and that is the only algorithm it supports.
  • iptables: the default; it selects a backend pod at random for each new connection and sticks with it for that connection.
  • IPVS: supports multiple ways to distribute traffic, but you first have to install it on your nodes, for example on a CentOS node with yum install ipvsadm, and then make it available.

As I said, a Kubernetes Service does not do round-robin by default. To activate IPVS you have to pass two parameters to kube-proxy:

--proxy-mode=ipvs

--ipvs-scheduler=rr (to select round-robin)
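
If kube-proxy is configured through a KubeProxyConfiguration file or ConfigMap rather than command-line flags (as kubeadm-based clusters typically do), the equivalent settings would look roughly like the sketch below. This is only a sketch: kube-proxy has to be restarted afterwards, and the IPVS kernel modules (and ipvsadm) must be present on each node.

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Equivalent of --proxy-mode=ipvs
mode: "ipvs"
ipvs:
  # Equivalent of --ipvs-scheduler=rr (round-robin)
  scheduler: "rr"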

EnzoAT_
  • IPVS is what we have set up by default, and yes, IPVS is there to enable better load balancing. But the case I describe above needs a way to map one source port to multiple target ports, and I cannot find a working solution so far. – Naga Jul 21 '19 at 21:50
  • Services connect to endpoints, and endpoints are defined by address and port, so there should be no problem using more than one endpoint with the same address. – EnzoAT_ Jul 22 '19 at 08:56

One can expose multiple ports using a single Service. In the Service manifest, spec.ports[] is an array, so one can specify multiple ports in it. For example, see below:

apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  - name: http-s1
    port: 81
    protocol: TCP
    targetPort: 9001
  selector:
    service: app1-service
  sessionAffinity: None
  type: ClusterIP

Now the hostname is the same and only the port differs, and by default kube-proxy in userspace mode chooses a backend via a round-robin algorithm.

Shudipta Sharma
  • I should have clarified that I need a single IP and port that has both container instances behind it. I have tried what you've suggested, but with this I would need something like an Ingress to load-balance between both Service ports. I am trying to see if there is something in Service land, maybe a special Service and Endpoints setup, that will get this to work. – Naga Jul 21 '19 at 20:22
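
One way to sketch the "special Service and Endpoints setup" mentioned in the comment above is a Service without a selector plus a manually managed Endpoints object that lists the same pod IP twice with different ports. The Service name app1-all and the pod IP 10.244.1.23 below are placeholders, and the pod IP would have to be kept in sync by hand whenever the Pod is rescheduled, which is the main drawback of this approach.

apiVersion: v1
kind: Service
metadata:
  name: app1-all
  namespace: exp
spec:
  # No selector: the Endpoints object below is managed by hand.
  ports:
  - name: http
    port: 80
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: app1-all   # must match the Service name
  namespace: exp
subsets:
- addresses:
  - ip: 10.244.1.23   # placeholder: the Pod's IP
  ports:
  - name: http
    port: 9000
    protocol: TCP
- addresses:
  - ip: 10.244.1.23   # same Pod IP, the second container's port
  ports:
  - name: http
    port: 9001
    protocol: TCP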

What I would do is separate the app into two different Deployments, with one container in each Deployment. I would set the same labels on both Deployments and put them both behind one single Service (sketched below).

This way, you don't even have to run them on different ports.

Later on, if you want one of them to receive more traffic, just play with the number of replicas of each Deployment.
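
A minimal sketch of that layout, assuming both instances can listen on the same port 9000. The Deployment names app1-a and app1-b are placeholders; each Deployment gets its own variant label so their selectors do not overlap, while the shared service: app1-service label is what the single Service selects.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-a
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
      variant: a
  template:
    metadata:
      labels:
        service: app1-service   # shared label, selected by the Service
        variant: a
    spec:
      containers:
      - image: app1:1.20
        name: app1
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-b
  namespace: exp
spec:
  replicas: 1
  selector:
    matchLabels:
      service: app1-service
      variant: b
  template:
    metadata:
      labels:
        service: app1-service
        variant: b
    spec:
      containers:
      - image: app1:1.20
        name: app1
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: app1
  namespace: exp
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    service: app1-service   # matches Pods from both Deployments
  type: ClusterIP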

suren