
I am deploying an Angular application on Kubernetes. After deployment the pod is up and running, but when I try to access the application through the ingress I get a 502 Bad Gateway error. The application was working fine until I made some recent functional changes and redeployed using the same YAML/config files that were used for the initial deployment. I'm clueless about what is wrong here.

Note:

1. This is not a duplicate of 72064326, as the server is listening on the correct port in nginx.conf.

Here are my files

1. Dockerfile

```dockerfile
# stage 1 as builder
FROM node:16.14.0 as builder


FROM nginx:alpine

## Remove default nginx config
# RUN rm -rf /etc/nginx/conf.d

# COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf

COPY ./.nginx/nginx.conf /etc/nginx/conf.d/default.conf

## Remove default nginx index page
RUN rm -rf /usr/share/nginx/html/*

# Copy the built app into nginx's web root
COPY dist/appname  /usr/share/nginx/html

EXPOSE 8080

ENTRYPOINT ["nginx", "-g", "daemon off;"]
```
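Before debugging at the Kubernetes layer, it can help to confirm the image itself serves on port 8080. A local sketch (image tag and container name below are illustrative; it assumes the Angular build output already exists in `dist/appname`):

```shell
# Build the image from the Dockerfile above
docker build -t appname:test .

# Run it, mapping container port 8080 to localhost
docker run -d --name appname-test -p 8080:8080 appname:test

# The app is served under /appname/, matching the nginx location block;
# a 200 here means the container and nginx.conf are not the problem
curl -i http://localhost:8080/appname/

# Clean up
docker rm -f appname-test
```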

2. nginx.conf (custom nginx)

```nginx
server {
  listen 8080;

  root /usr/share/nginx/html;
  include /etc/nginx/mime.types;

  location /appname/ {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html =404;
  }

  location ~ \.(js|css) {
    root /usr/share/nginx/html;
    # try finding the file first, if it's not found we fall
    # back to the meteor app
    try_files $uri /index.html =404;
  }
}
```

3. Deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    com.xxx.path: /platform/custom
  name: appname
  namespace: yyyyyy

spec:
  selector:
    matchLabels:
      io.kompose.service: appname
  replicas: 1
  template:
    metadata:
      labels:
        clusterName: custom2
        department: customplatform
        io.kompose.service: appname
        com.xxxx.monitor: application
        com.xxxx.platform: custom
        com.xxxx.service: appname
    spec:
      containers:
      - env:
        - name: ENVIRONMENT
          value: yyyyyy
        resources:
          requests:
            memory: "2048Mi"
          limits:
            memory: "4096Mi"
        image: cccc.rrr.xxxx.aws/imgrepo/imagename:latest
        imagePullPolicy: Always
        securityContext:
        name: image
        ports:
        - containerPort: 8080
      restartPolicy: Always
```
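Once this deployment is applied, a port-forward straight to the pod bypasses both the service and the ingress, which narrows down where the 502 originates (a sketch; names and namespace taken from the manifests above):

```shell
# Forward local port 8080 directly to the deployment's pod
kubectl -n yyyyyy port-forward deploy/appname 8080:8080 &

# If this returns the Angular index page, the pod and nginx config are fine,
# and the problem is in the service or ingress layer
curl -i http://localhost:8080/appname/
```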


4. Service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    com.xxxx.path: /platform/custom
  labels:
    clusterName: custom2
    department: customplatform
    io.kompose.service: appname
    com.xxxx.monitor: application
    com.xxxx.platform: custom
    com.xxxx.service: appname
  name: appname
  namespace: yyyyyy
spec:
  ports:
  - name: "appname"
    port: 8080
    targetPort: 8080
  selector:
    io.kompose.service: appname
```

5. Ingress

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: custom-ingress
  namespace: yyyyyy
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "http://custom-yyyyyy.dev.xxxx.aws:8080/"
    nginx.ingress.kubernetes.io/proxy-redirect-to: "$scheme://$http_host/"
spec:
  rules:
  - host: custom-yyyyyy.dev.xxxx.aws
    http:
      paths:
      - backend:
          serviceName: appname
          servicePort: 8080
        path: /appname
```
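As a side note, the `extensions/v1beta1` Ingress API was removed in Kubernetes 1.22, so on newer clusters this manifest will be rejected outright. A sketch of the same rule in the `networking.k8s.io/v1` shape (same host, service, and port as above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: custom-ingress
  namespace: yyyyyy
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: custom-yyyyyy.dev.xxxx.aws
    http:
      paths:
      - path: /appname
        pathType: Prefix
        backend:
          service:
            name: appname
            port:
              number: 8080
```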


[![application screenshot][1]][1]


  [1]: https://i.stack.imgur.com/CX3k1.png
suku

1 Answer


The screenshot you attached shows an nginx error. Initially I thought it was a configuration error on your pod (an error in the actual container).

But then I noticed you are using an NGINX ingress controller, so most likely the issue is in the ingress controller.

I would proceed mechanically, as with anything related to Kubernetes ingress.

In particular:

  1. Check the ingress controller logs for error messages. I don't have experience with the NGINX ingress controller, but health checking with mixed protocols (HTTPS externally, HTTP to the service) tends to be tricky. With the ALB controller, I always check that the target groups have backend services. In your case I would first test without the proxy-redirect-from and proxy-redirect-to annotations; `"$scheme://$http_host/"` looks strange.
  2. Check that the service has endpoints defined (`kubectl get endpoints appname -n yyyyyy`), which will tell you whether the pods are running and whether the service is connected to them.
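The two checks above can be sketched as follows (the controller namespace and label assume a default ingress-nginx install; adjust to your cluster):

```shell
# 1. Ingress controller logs; failed upstream connections that cause 502s
#    are usually logged here
kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=100

# 2. Does the service have endpoints? An empty ENDPOINTS column means the
#    selector matches no ready pods, which produces exactly a 502 at the ingress
kubectl -n yyyyyy get endpoints appname

# Describing the ingress often surfaces misconfiguration events directly
kubectl -n yyyyyy describe ingress custom-ingress
```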
Gonfva
  • OK, I found out the pod is running, however the container never got created. While redeploying, I saw the pod in the "ContainerCreating" state (logs indicate the container is waiting for a start message) for more than 15 minutes, and then the pod just runs but the container is never created. I couldn't find out why. Any idea what could be the reason for the container not getting created? – suku Aug 14 '22 at 17:04
  • Identified the issue. The container entered an error state due to an enterprise-level issue (outside of my control). Since the application was exposed before the container was created, Kubernetes relayed the 502 error from nginx. More details here [https://komodor.com/learn/how-to-fix-kubernetes-502-bad-gateway-error/#:~:text=A%20502%20Bad%20Gateway%20error,a%20proxy%20or%20gateway%20server]. Worked with the SRE teams to resolve the enterprise-level infrastructure issue and now the app is working fine. – suku Aug 16 '22 at 21:19
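For a pod stuck in ContainerCreating, as described in the comments, the pod's event list usually names the blocker (image pull failure, volume mount, CNI, node pressure, etc.). A sketch using the labels from the deployment above:

```shell
# The Events section at the bottom of the output shows why the container
# has not started
kubectl -n yyyyyy describe pod -l io.kompose.service=appname

# Namespace events sorted oldest-to-newest can reveal node- or
# infrastructure-level problems
kubectl -n yyyyyy get events --sort-by=.metadata.creationTimestamp
```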