
I have a cluster running two deployments and an ingress (Caddy). One of my deployments is working fine: it's a golang image listening on port 80. The other deployment is a php-fpm image listening on port 9000, and when I make any request to the php-fpm domain, it responds with 502.

php-fpm deployment and service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-api-deployment
  labels:
    app: main-api
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 2
  selector:
    matchLabels:
      app: main-api
  template:
    metadata:
      labels:
        app: main-api
    spec:
      containers:
        - name: main-api
          image: 
          ports:
            - containerPort: 9000
          envFrom:
            - configMapRef:
                name: main-api
---
apiVersion: v1
kind: Service
metadata:
  name: main-api-service
spec:
  selector:
    app: main-api
  ports:
    - name: fpm
      port: 9000
      targetPort: 9000
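
For completeness, I have no probes defined. Below is a minimal sketch of a TCP readiness probe on 9000 that I could add next to ports: in the container spec above (not something currently in the manifest), just to confirm the kubelet can open a connection to the container on that port:

          readinessProbe:
            tcpSocket:
              port: 9000
            initialDelaySeconds: 5
            periodSeconds: 10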

ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: caddy
spec:
  rules:
  - host: 
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-service-service
            port:
              number: 80
  - host: 
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: main-api-service
            port:
              number: 9000
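
One thing I'm unsure about: php-fpm speaks FastCGI rather than plain HTTP, so whatever the ingress generates has to proxy FastCGI to port 9000. In standalone Caddy v2 that would look roughly like the sketch below (not my controller's actual config; the hostname follows the standard service DNS pattern and the site root /var/www/html is an assumption):

example.com {
    root * /var/www/html
    php_fastcgi main-api-service.default.svc.cluster.local:9000
}

I don't know whether the caddy ingress controller produces something equivalent from a plain Ingress rule like the one above.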

running a curl container in another pod and trying to connect to the main-api container internally:

/ $ curl 10.244.0.126:9000 -v
*   Trying 10.244.0.126:9000...
* Connected to 10.244.0.126 (10.244.0.126) port 9000 (#0)
> GET / HTTP/1.1
> Host: 10.244.0.126:9000
> User-Agent: curl/8.0.1-DEV
> Accept: */*
> 
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
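
Since php-fpm speaks FastCGI rather than HTTP, I'm not sure the reset above proves much on its own; a FastCGI client would be a fairer test of the port. A sketch using cgi-fcgi (the FastCGI CLI client from the fcgi tools, which would need to be installed in the debug container; the script path is an assumption about the php-fpm image):

/ $ SCRIPT_NAME=/index.php \
>   SCRIPT_FILENAME=/var/www/html/index.php \
>   REQUEST_METHOD=GET \
>   cgi-fcgi -bind -connect 10.244.0.126:9000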

main-api pod description:

Name:             main-api-deployment-7cd9d47886-c5g45
Namespace:        default
Priority:         0
Service Account:  default
Node:             pool-q5an7/10.116.0.6
Start Time:       Thu, 23 Mar 2023 16:59:37 -0300
Labels:           app=main-api
                  pod-template-hash=7cd9d47886
Annotations:      kubectl.kubernetes.io/restartedAt: 2023-03-23T16:59:37-03:00
Status:           Running
IP:               10.244.0.126
IPs:
  IP:           10.244.0.126
Controlled By:  ReplicaSet/main-api-deployment-7cd9d47886
Containers:
  main-api:
    Container ID:   containerd://04b6d8f62295e174ed196b0ad0b3002fe0b37c64faafe9cfd623abb4e98a30c7
    Image:          ...
    Image ID:       ...
    Port:           9000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 23 Mar 2023 16:59:40 -0300
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      main-api    ConfigMap  Optional: false
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-54c4q (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-54c4q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      

main-api service description:

Name:              main-api-service
Namespace:         default
Labels:            
Annotations:       
Selector:          app=main-api
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.245.139.92
IPs:               10.245.139.92
Port:              fpm  9000/TCP
TargetPort:        9000/TCP
Endpoints:         10.244.0.126:9000
Session Affinity:  None
Events:            

last log in the pod:

[23-Mar-2023 20:01:20] NOTICE: fpm is running, pid 289
[23-Mar-2023 20:01:20] NOTICE: ready to handle connections

The pod has no errors in the logs and has never restarted.
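
To rule out php-fpm binding only to loopback, I can check the listen setting in the pool config. A sketch (the path assumes the official php-fpm image layout under /usr/local/etc/php-fpm.d/):

kubectl exec deploy/main-api-deployment -- grep -R "^listen" /usr/local/etc/php-fpm.d/
# listen = 9000            -> all interfaces, reachable via the pod IP
# listen = 127.0.0.1:9000  -> loopback only, unreachable from other pods / the ingress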

ingress description:

Name:             ingress
Labels:           
Namespace:        default
Address:          ...  
Ingress Class:    
Default backend:  
Rules:
  Host                                 Path  Backends
  ----                                 ----  --------
  .com
                                       /   go-service-service:80 (10.244.0.76:80)
  .com
                                       /   main-api-service:9000 (10.244.0.126:9000)
Annotations:                           kubernetes.io/ingress.class: caddy
Events:                                

As I said, one of the containers (go) is working fine and its API responds correctly. The php-fpm container only returns 502, and I couldn't find the problem.

wcb
  • Hi wcb, welcome to S.F. I'd guess php-fpm is listening on 127.0.0.1:9000 instead of 0.0.0.0:9000, based on your intra-cluster use of curl. If you can curl :9000 from the container itself, or via `kubectl port-forward 9000:9000`, then that's the problem. – mdaniel Mar 25 '23 at 03:08
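
A sketch of the check mdaniel describes, using the names from the manifests above (as far as I understand, kubectl port-forward dials loopback inside the pod):

kubectl port-forward deploy/main-api-deployment 9000:9000
# in a second terminal; the interesting part is whether the TCP connect succeeds,
# since php-fpm is likely to reset a plain-HTTP request either way
curl -v 127.0.0.1:9000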

0 Answers