
I am getting a strange result from running nginx and an IIS server together in a single Kubernetes pod. It seems to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page (screenshot: default IIS welcome page).

However, when I go through the reverse proxy, I see this partial result (screenshot: partially rendered page).

Here are the files:

nginx.conf:

events {
  worker_connections  4096;  ## Default: 1024
}

http {
    server {
       listen 81;
       # Use a variable to stop nginx from resolving the upstream hostname at
       # startup; otherwise nginx starts faster than the IIS server and the
       # container enters a failure / restart loop.
       set $target "http://127.0.0.1:80/";
       location / {
          proxy_pass $target;
       }
    }
}
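A side note on the variable trick: with a literal proxy_pass URL, nginx resolves the upstream host once at startup, while a variable defers resolution to request time. An IP-literal upstream like 127.0.0.1 needs no resolver, but if the target were a DNS name, a resolver directive would be required. A hedged sketch (the hostname iis-backend and the resolver address are hypothetical, not from this setup):

```nginx
# Sketch only: if the upstream were addressed by DNS name instead of an IP,
# runtime resolution through a variable needs an explicit resolver.
http {
    server {
       listen 81;
       resolver 10.0.0.10 valid=30s;          # cluster DNS, example address
       set $target "http://iis-backend:80/";  # resolved per request, not at startup
       location / {
          proxy_pass $target;
       }
    }
}
```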

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ...
  name: ...
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: ...
  template:
    metadata:
      labels:
        pod: ...
      name: ...
    spec:
      containers:
        - image: claudiubelu/nginx:1.15-1-windows-amd64-1809
          name: nginx-reverse-proxy
          volumeMounts:
            - mountPath: "C:/usr/share/nginx/conf" 
              name: nginx-conf
          imagePullPolicy: Always
        - image: some-repo/proprietary-server-including-iis
          name: ...
          imagePullPolicy: Always
      nodeSelector:
        kubernetes.io/os: windows
      imagePullSecrets:
        - name: secret1
      volumes:
        - name: nginx-conf
          persistentVolumeClaim:
            claimName: pvc-nginx

Mapping the nginx.conf file from a volume is just a convenient way to rapidly test different configs; new ones can be swapped in using kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/.
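For completeness, a possible swap-and-reload cycle, assuming the pod and container names above; the deployment pod name is a placeholder, and this assumes the image's nginx binary is on the PATH and supports signal-based reload:

```shell
# Copy the edited config into the shared PVC through the busybox helper pod.
kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/

# Ask the running nginx container to re-read its configuration.
# <deployment-pod> is a placeholder; find the real name with "kubectl get pods".
kubectl exec <deployment-pod> -c nginx-reverse-proxy -- nginx -s reload
```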

Busybox pod (used to access the PVC):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-busybox-pod
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "360000"
      imagePullPolicy: Always
      name: busybox
      volumeMounts:
       - name: nginx-conf
         mountPath: "/mnt/nginx/conf"
  restartPolicy: Always
  volumes:
    - name: nginx-conf
      persistentVolumeClaim:
        claimName: pvc-nginx
  nodeSelector:
    kubernetes.io/os: linux

And lastly the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: azurefile  

Any ideas why?

jrbe228
  • Hope this link helps: https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/. – TengFeiXie Feb 13 '23 at 06:10
  • No luck yet. The proxy_pass directive should forward requests and responses without modification. Maybe there is some conflict over port 80, even though the default "listen 80" is explicitly replaced by "listen 81"? – jrbe228 Feb 13 '23 at 15:49
  • After the reverse proxy, there may be problems where images or CSS cannot be loaded. Maybe you can use DevTools to check page resource loading. – TengFeiXie Feb 15 '23 at 09:57
  • Using DevTools, I see Status code 200 for "http://10.0.0.100:81/iisstart.png". No failures reported in the Console. Separately, the nginx container is logging GET requests to "access.log" but nothing to "error.log". – jrbe228 Feb 15 '23 at 18:12
  • Status code 200 means the request succeeded. Since there are no 404s or other error codes, there will be no error records, so you can't troubleshoot the problem from this information alone. It is recommended to use FRT to get more detailed information. – TengFeiXie Feb 17 '23 at 09:53
  • Could you send a link to the FRT tool? I'm unfamiliar with it. – jrbe228 Feb 17 '23 at 16:41
  • Failed Request Tracing: https://learn.microsoft.com/en-us/iis/troubleshoot/using-failed-request-tracing/troubleshoot-with-failed-request-tracing. – TengFeiXie Feb 20 '23 at 01:42

1 Answer


After some testing, here is a working nginx.conf:

events {
  worker_connections  4096;
}

http {
    server {
       listen 81;
       set $target "http://127.0.0.1:80";
       location / {
          proxy_pass $target;
          proxy_set_header Host $host;
       }
    }
}
  • New directive: proxy_set_header Host $host;
  • Trailing slash removed from the target variable used by the proxy_pass directive.
  • (Specific to my application) Other endpoints on the server are more reliably reachable using $host:$server_port in place of $host, because the app server redirects incoming requests to different URIs and loses the proxy's port (81) in the process.
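To see why forwarding the client's Host header matters, here is a minimal stdlib Python sketch (not from the original post; the echo backend and the addresses 127.0.0.1:80 / 10.0.0.100:81 are illustrative). A backend that builds absolute links from the Host header it receives will emit whatever host:port the proxy sent, not the address the client used:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHost(BaseHTTPRequestHandler):
    def do_GET(self):
        # The backend only sees the Host header the proxy forwarded to it,
        # and uses it to build an absolute link (as IIS-hosted apps often do).
        body = f"link: http://{self.headers['Host']}/iisstart.png".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHost)  # ephemeral port for the demo
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(host_header):
    # Send a request with an explicit Host header, as a proxy would.
    req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                                 headers={"Host": host_header})
    return urllib.request.urlopen(req).read().decode()

# Without "proxy_set_header Host $host", nginx sends the upstream's own
# address, so the backend generates links that bypass the proxy:
print(fetch("127.0.0.1:80"))   # → link: http://127.0.0.1:80/iisstart.png

# With the directive, the client-facing address reaches the backend instead:
print(fetch("10.0.0.100:81"))  # → link: http://10.0.0.100:81/iisstart.png
```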