
I am creating a deployment that uses a custom image from my private registry. The container needs a large number of ports exposed, and I want to expose them with a NodePort service. If I create a service with 1000 UDP ports and then create the deployment, the deployment's pod keeps crashing. If I delete both the service and the deployment, then recreate the deployment alone (without the service), the pod starts normally.

Any clue why this would be happening?

Pod Description:

Name:         freeswitch-7764cff4c9-d8zvh
Namespace:    default
Priority:     0
Node:         cc-lab/192.168.102.55
Start Time:   Wed, 01 Jun 2022 15:44:09 +0000
Labels:       app=freeswitch
              pod-template-hash=7764cff4c9
Annotations:  cni.projectcalico.org/containerID: de4baf5c4522e1f3c746a08a60bd7166179bac6c4aef245708205112ad71058a
              cni.projectcalico.org/podIP: 10.1.5.8/32
              cni.projectcalico.org/podIPs: 10.1.5.8/32
Status:       Running
IP:           10.1.5.8
IPs:
  IP:           10.1.5.8
Controlled By:  ReplicaSet/freeswitch-7764cff4c9
Containers:
  freeswtich:
    Container ID:   containerd://9cdae9120cc075af73d57ea0759b89c153c8fd5766bc819554d82fdc674e03be
    Image:          192.168.102.55:32000/freeswitch:v2
    Image ID:       192.168.102.55:32000/freeswitch@sha256:e6a36d220f4321e3c17155a889654a83dc37b00fb9d58171f969ec2dccc0a774
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    139
      Started:      Wed, 01 Jun 2022 15:47:16 +0000
      Finished:     Wed, 01 Jun 2022 15:47:20 +0000
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /etc/freeswitch from freeswitch-config (rw)
      /tmp from freeswitch-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mwkc8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  freeswitch-config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  freeswitch-config
    ReadOnly:   false
  freeswitch-tmp:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  freeswitch-tmp
    ReadOnly:   false
  kube-api-access-mwkc8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Normal   Pulled   4m3s (x5 over 5m44s)  kubelet  Container image "192.168.102.55:32000/freeswitch:v2" already present on machine
  Normal   Created  4m3s (x5 over 5m43s)  kubelet  Created container freeswtich
  Normal   Started  4m3s (x5 over 5m43s)  kubelet  Started container freeswtich
  Warning  BackOff  41s (x24 over 5m35s)  kubelet  Back-off restarting failed container

Service :

apiVersion: v1
kind: Service
metadata:
  name: freeswitch
spec:
  type: NodePort
  selector:
    app: freeswitch
  ports:
  - port: 30000
    nodePort: 30000
    name: rtp30000
    protocol: UDP
  - port: 30001
    nodePort: 30001
    name: rtp30001
    protocol: UDP
  - port: 30002
    nodePort: 30002
    name: rtp30002
    protocol: UDP
  - port: 30003
    nodePort: 30003
    name: rtp30003
    protocol: UDP
  - port: 30004
    # ... this pattern continues for every port up to 30999
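A thousand near-identical entries are easier to generate than to write by hand. A minimal sketch (shell, assuming the same `rtpNNNNN` naming scheme used above) that emits the ports list for the Service spec:

```shell
# Emit the repeated UDP port entries for the Service spec above,
# four lines per port, for ports 30000 through 30999.
for p in $(seq 30000 30999); do
  printf -- '  - port: %s\n    nodePort: %s\n    name: rtp%s\n    protocol: UDP\n' "$p" "$p" "$p"
done
```

The output can be appended directly under `spec.ports:` in the manifest.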

Deployment :

apiVersion: apps/v1
kind: Deployment
metadata:
  name: freeswitch
spec:
  selector:
    matchLabels:
      app: freeswitch
  template:
    metadata:
      labels:
        app: freeswitch
    spec:
      containers:
        - name: freeswtich
          image: 192.168.102.55:32000/freeswitch:v2
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: freeswitch-config
              mountPath: /etc/freeswitch
            - name: freeswitch-tmp
              mountPath: /tmp
      restartPolicy: Always
      volumes:
        - name: freeswitch-config
          persistentVolumeClaim:
            claimName: freeswitch-config
        - name: freeswitch-tmp
          persistentVolumeClaim:
            claimName: freeswitch-tmp
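One difference the Service makes that is easy to overlook (not confirmed by this thread, but consistent with the crash only occurring when the Service exists first): for every named Service port, Kubernetes injects service-link environment variables (e.g. `FREESWITCH_PORT_30000_UDP`, plus `_ADDR`, `_PORT`, and `_PROTO` variants) into every pod started after the Service is created. With ~1000 ports that is several thousand extra variables, and some programs crash with a segfault (exit code 139) when the environment grows that large. This can be ruled out by disabling the injection via the standard `enableServiceLinks` pod-spec field:

```yaml
# In the Deployment's pod template (spec.template.spec):
spec:
  template:
    spec:
      enableServiceLinks: false  # skip injecting per-Service-port env vars
```

If the pod then starts with the Service in place, the oversized environment was the culprit.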
  • What does `kubectl logs` show for that failed pod? – confused genius Jun 01 '22 at 16:15
  • @confusedgenius It shows the normal output of the application (FreeSWITCH PBX). The logs show no errors, just the application's usual output, yet the container keeps crashing. – frisky5 Jun 01 '22 at 16:53
  • It sounds like if you create the deployment first, it runs correctly. If you create the service *afterwards*, does the deployment crash? – larsks Jun 01 '22 at 17:46
  • No, if I create the deployment first it doesn't crash. – frisky5 Jun 01 '22 at 18:02

0 Answers