
I'm following this tutorial to expose my Kubernetes Dashboard via an Ingress and secure that Ingress with authentication, using oauth2-proxy with GitHub as the identity provider.

Basically, any request to my Kubernetes Dashboard (at host kubernetes.vismark.home) should be redirected to the GitHub login screen; after the user enters valid GitHub credentials, they should be redirected back to my Kubernetes Dashboard.

I have an oauth2-proxy.yaml file that creates a simple Deployment, Service and Ingress for oauth2 proxy, as shown below, which specifies my GitHub application config:

// oauth2-proxy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=github
        - --email-domain=*
        - --upstream=file:///dev/null
        - --http-address=0.0.0.0:4180
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          value: 3d00820d20ac2d5494f3 # Our client ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: XXXXXXXX
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: XXXXXXXX
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP

---

apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  ingressClassName: nginx
  rules:
  - host: kubernetes.vismark.home
    http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180

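As an aside, I know the client ID, client secret, and cookie secret are inlined as plain `value:` fields above. A Kubernetes Secret would keep them out of the manifest; here is a sketch (assuming a Secret named `oauth2-proxy-creds`, which is my own naming, created alongside the other objects):

```yaml
# Hypothetical Secret holding the oauth2-proxy credentials.
# Each env var in the Deployment would then use
# valueFrom.secretKeyRef instead of a literal value.
apiVersion: v1
kind: Secret
metadata:
  name: oauth2-proxy-creds
  namespace: kubernetes-dashboard
type: Opaque
stringData:
  client-id: 3d00820d20ac2d5494f3
  client-secret: XXXXXXXX
  cookie-secret: XXXXXXXX
```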
The Deployment, Service and Ingress for the oauth2 proxy work as expected -- after creating the objects I can access http://kubernetes.vismark.home/oauth2 and am prompted to sign in.

My issue is with the Ingress yaml that exposes the Kubernetes Dashboard, kubernetes-ingress.yaml:

// kubernetes-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://oauth2-proxy.kubernetes-dashboard.svc.cluster.local/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2-proxy.kubernetes-dashboard.svc.cluster.local/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: kubernetes.vismark.home
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80

The kubernetes-ingress.yaml has two annotations that tell the NGINX ingress controller to validate each request against the oauth2-proxy Service (auth-url) and to send unauthenticated users to its sign-in endpoint (auth-signin).

But when I try to access the host "http://kubernetes.vismark.home", I get a 503 error.

The kubernetes-ingress.yaml file exposes my dashboard just fine when I remove the two oauth2-proxy annotations, but as soon as I add the two annotations back in, I get the 503.

I believe this kubernetes-ingress.yaml file is where I misconfigured something; I just don't know what that something is.

Below are my GitHub OAuth application settings for additional context:

  • Homepage URL: https://kubernetes.vismark.home
  • Authorization Callback URL: https://kubernetes.vismark.home/oauth2/callback

Comments:

  • Hi Vismark Juarez, welcome to S.F. Saying that you get a 503 isn't specific enough; what, exactly, is nginx trying to contact that it cannot? You'll want to check the ingress controller's logs and look for that 503. And, of course, ensure that those oauth2 routes are happy, also. Good luck! – mdaniel Mar 05 '23 at 20:10
  • While unfamiliar with those `nginx.ingress.kubernetes.io/auth` annotations, using names in `.cluster.local` is probably wrong here (in-SDN, not accessible by your client). While the doc does suggest this should be a name resolvable by your client -- which would make more sense anyway... Try changing those names to `kubernetes.vismark.home` – SYN Mar 05 '23 at 23:01
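
Update: if I understand SYN's comment correctly, the annotations would point at the externally resolvable hostname instead of the in-cluster `.cluster.local` name. Here is an untested sketch of what I think that change looks like:

```yaml
# Sketch only: auth endpoints addressed via the public hostname,
# as suggested in the comments -- not yet verified.
metadata:
  name: kubernetes-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://kubernetes.vismark.home/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://kubernetes.vismark.home/oauth2/start?rd=$escaped_request_uri"
```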
