
I don't have much experience with Kubernetes and I'm facing some issues.

I want to authenticate an app in Kubernetes using Istio Ingress Gateway, OAuth2-Proxy and Keycloak. The app I want to put behind authentication is PgAdmin.

To set this up, I followed this tutorial: https://medium.com/@senthilrch/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-part-2-of-2-dbb3fb9cd0d0

However, when OAuth2-Proxy tries to connect to Keycloak I get the following error:

Get "https://keycloak.localtest.me/realms/my-realm/.well-known/openid-configuration": dial tcp 127.0.0.1:443: connect: connection refused

If I instead change the issuer to the internal DNS name (keycloak.test.svc.cluster.local), I get this error:

oidc: issuer did not match the issuer returned by provider, expected "https://keycloak.test.svc.cluster.local/realms/my-realm" got "https://keycloak.localtest.me/realms/my-realm"

I'm not sure what is happening; it looks like the OAuth2-Proxy pod is not able to reach Keycloak through the ingress, but I'm not certain.
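
To narrow it down, this is roughly the check I run from throwaway pods inside the mesh (the dns-debug/curl-debug pod names and the busybox/curlimages images are just placeholders, not part of my manifests; because of the sidecar injection in the test namespace the pods may not exit cleanly, but the output is still visible):

# where does keycloak.localtest.me resolve from inside the test namespace?
kubectl run -n test dns-debug --rm -it --restart=Never --image=busybox:1.36 --command -- nslookup keycloak.localtest.me

# can the discovery document be fetched from inside the cluster?
kubectl run -n test curl-debug --rm -it --restart=Never --image=curlimages/curl --command -- curl -vk https://keycloak.localtest.me/realms/my-realm/.well-known/openid-configuration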

The steps I follow are these:

# start minikube
minikube start --cpus 6 --memory 8192

# create dedicated namespace for our deployments
kubectl create ns test

#  install istio
istioctl install

# enable Envoy proxy injection
kubectl label ns test istio-injection=enabled

# create TLS cert and secrets in kubernetes.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout auth-tls.key -out auth-tls.crt -subj "/CN=keycloak.localtest.me/O=test"

kubectl create secret -n test tls auth-tls-secret --key auth-tls.key --cert auth-tls.crt

# deploy PostgreSQL cluster - in dev we will use 1 replica.
helm install -n test keycloak-db bitnami/postgresql-ha --set postgresql.replicaCount=1

# deploy Keycloak cluster
kubectl apply -n test -f keycloak-config.yaml

# Deploy PgAdmin

kubectl apply -n test -f pgadmin-config.yaml

# create Gateway
kubectl apply -n istio-system -f istio-gateway.yaml

# Apply virtual services.
kubectl apply -n test -f keycloak-virtual-service.yaml
kubectl apply -n test -f pgadmin-virtual-service.yaml 
kubectl apply -n test -f oauth-virtual-service.yaml

# Apply auth Policy and deploy OAuth-Proxy

kubectl apply -n test -f auth-policy.yaml
kubectl apply -n test -f istio-operator.yaml
kubectl apply -n test -f oauth2.yaml
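
After these steps I just sanity-check that everything is running before testing the login (this is not from the tutorial, only the usual kubectl checks):

# everything up in the test namespace?
kubectl get pods -n test

# ingress gateway service present?
kubectl get svc -n istio-system istio-ingressgateway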

And these are my config files:

# Keycloak config file

apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: https
      port: 443
      targetPort: 8443
  selector:
    app: keycloak
  type: ClusterIP
  # clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:20.0.2
          args: ["start", "--cache-stack=kubernetes"]
          volumeMounts:
          - name: certs
            mountPath: "/etc/certs"
            readOnly: true
          env:
            - name: KEYCLOAK_ADMIN
              value: "admin"
            - name: KEYCLOAK_ADMIN_PASSWORD
              value: "admin"
            - name: KC_HTTPS_CERTIFICATE_FILE
              value: "/etc/certs/tls.crt"
            - name: KC_HTTPS_CERTIFICATE_KEY_FILE
              value: "/etc/certs/tls.key"
            - name: KC_HEALTH_ENABLED
              value: "true"
            - name: KC_METRICS_ENABLED
              value: "true"
            - name: KC_HOSTNAME
              value: keycloak.localtest.me
            - name: KC_PROXY
              value: "edge"
            - name: KC_DB
              value: postgres
            - name: KC_DB_URL
              value: "jdbc:postgresql://keycloak-db-postgresql-ha-pgpool/postgres"
            - name: KC_DB_USERNAME
              value: "postgres"
            - name: KC_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keycloak-db-postgresql-ha-postgresql
                  key: password
            - name: jgroups.dns.query
              value: keycloak
          ports:
            - name: jgroups
              containerPort: 7600
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              scheme: HTTPS
              path: /health/ready
              port: 8443
            initialDelaySeconds: 60
            periodSeconds: 1
      volumes:
      - name: certs
        secret:
          secretName: auth-tls-secret

# PgAdmin config file

apiVersion: v1
kind: Service
metadata:
  name: pgadmin
spec:
  selector:
    app: pgadmin
  ports:
    - name: http
      port: 80
      targetPort: 80
  type: NodePort
  
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  selector:
    matchLabels:
      app: pgadmin
  replicas: 1
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin
          image: dpage/pgadmin4
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: admin@pgadmin.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: admin123
# Istio ingress Gateway config file

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
  namespace : istio-system
spec:
  selector:
    istio: ingressgateway
  servers:

    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'

    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls: 
        mode: PASSTHROUGH
        
      hosts:
        - '*'

# Virtual service files

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: keycloak-vs
spec:
  hosts:
    - keycloak.localtest.me
  gateways:
    - istio-system/test-gateway

  tls:
  - match:
    - sniHosts: ["keycloak.localtest.me"]
    route:
    - destination:
        host: keycloak.test.svc.cluster.local
        port:
           number: 443

---

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: pgadmin-vs
spec:
  hosts:
    - pgadmin.localtest.me
  gateways:
    - istio-system/test-gateway

  http:
  - route:
    - destination:
        host: pgadmin.test.svc.cluster.local
        port:
          number: 80

---

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: oauth2-vs
spec:
  hosts:
    - oauth2-proxy.localtest.me
  gateways:
    - istio-system/test-gateway

  http:
  - route:
    - destination:
        host: oauth-proxy.test.svc.cluster.local
        port:
          number: 4180

# Istio Operator file

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: test-istio-operator
  namespace : test
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    extensionProviders:
    - name: "oauth2-proxy"
      envoyExtAuthzHttp:
        service: "oauth-proxy.test.svc.cluster.local"
        port: "4180" # The default port used by oauth2-proxy.
        includeHeadersInCheck: ["authorization", "cookie","x-forwarded-access-token","x-forwarded-user","x-forwarded-email","x-forwarded-proto","proxy-authorization","user-agent","x-forwarded-host","from","x-forwarded-for","accept","x-auth-request-redirect"] # headers sent to the oauth2-proxy in the check request.
        headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token","x-forwarded-access-token"] # headers sent to backend application when request is allowed.
        headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when request is denied.

# OAuth2-Proxy config file

apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth-proxy
  name: oauth-proxy
spec:
  type: NodePort
  selector:
    app: oauth-proxy
  ports:
  - name: http-oauthproxy
    port: 4180
    nodePort: 31023
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: oauth-proxy
  name: oauth-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "oauth-proxy"
  template:
    metadata:
      labels:
        app: oauth-proxy
    spec:
      containers:
      - name: oauth-proxy
        image: "quay.io/oauth2-proxy/oauth2-proxy:v7.2.0"
        ports:
        - containerPort: 4180
        args:
          - --http-address=0.0.0.0:4180
          - --upstream="file://dev/null"
          - --set-xauthrequest=true
          - --pass-host-header=true
          - --pass-access-token=true
          

        volumeMounts:
        - name: keycloak-ca
          mountPath: "/etc/ssl/certs"
          # subPath: "/certs"

        env:
          # OIDC Config
          - name: "OAUTH2_PROXY_PROVIDER"
            value: "keycloak-oidc"
          - name: "OAUTH2_PROXY_OIDC_ISSUER_URL"
            value: "https://keycloak.localtest.me/realms/my-realm"
          - name: "OAUTH2_PROXY_CLIENT_ID"
            value: "oauth2-proxy"
          - name: "OAUTH2_PROXY_CLIENT_SECRET"
            value: "AtHekXAFXtbPGaSn8N31ihmJI5F3Lukx"
          # Cookie Config
          - name: "OAUTH2_PROXY_COOKIE_SECURE"
            value: "false"
          - name: "OAUTH2_PROXY_COOKIE_SECRET"
            value: "ZzBkN000Wm0pQkVkKUhzMk5YPntQRUw_ME1oMTZZTy0="
          - name: "OAUTH2_PROXY_COOKIE_DOMAINS"
            value: "*"
          # Proxy config
          - name: "OAUTH2_PROXY_EMAIL_DOMAINS"
            value: "*"
          - name: "OAUTH2_PROXY_WHITELIST_DOMAINS"
            value: "*"
          - name: "OAUTH2_PROXY_HTTP_ADDRESS"
            value: "0.0.0.0:4180"
          - name: "OAUTH2_PROXY_SET_XAUTHREQUEST"
            value: "true"
          - name: OAUTH2_PROXY_PASS_AUTHORIZATION_HEADER
            value: "true"
          - name: OAUTH2_PROXY_SSL_UPSTREAM_INSECURE_SKIP_VERIFY
            value: "true"
          - name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
            value: "true"
          - name: OAUTH2_PROXY_SET_AUTHORIZATION_HEADER
            value: "true"
      
      volumes:
      - name: keycloak-ca
        secret:
          secretName: auth-tls-secret

However, as I mentioned before, I get the error:

Get "https://keycloak.localtest.me/realms/my-realm/.well-known/openid-configuration": dial tcp 127.0.0.1:443: connect: connection refused

And if I change the issuer in OAuth2-Proxy I get:

oidc: issuer did not match the issuer returned by provider, expected "https://keycloak.test.svc.cluster.local/realms/my-realm" got "https://keycloak.localtest.me/realms/my-realm"

That makes sense, of course.

Notice that if I open https://keycloak.localtest.me/realms/my-realm/.well-known/openid-configuration in a browser, it works fine.

I think the OAuth2-Proxy pod is not going through the ingress, but I don't know how to solve it.
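
One idea I had, but have not tested yet, is to force the oauth-proxy pod to resolve keycloak.localtest.me to the Istio ingress gateway instead of 127.0.0.1, for example with a hostAliases patch like the one below (istio-ingressgateway is the default service name created by istioctl install; I'm not sure this is the right approach at all):

# untested idea: make keycloak.localtest.me point at the ingress gateway's ClusterIP inside the oauth-proxy pod
INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.clusterIP}')
kubectl patch deployment oauth-proxy -n test --type merge -p \
  "{\"spec\":{\"template\":{\"spec\":{\"hostAliases\":[{\"ip\":\"$INGRESS_IP\",\"hostnames\":[\"keycloak.localtest.me\"]}]}}}}"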

I don't know what is wrong; if you have any ideas, I would really appreciate your help.

Thanks for your time and sorry for this long question.

Nicolas
  • Did you try to override the Keycloak frontend URL with "https://keycloak.localtest.me/realms/my-realm"? – dreamcrash Mar 24 '23 at 06:31
  • Hello @dreamcrash, no, I haven't tried that. How can I do it? – Nicolas Mar 24 '23 at 08:10
  • I believe there is also a JDBC connection URL error from Keycloak to PostgreSQL. Try it like this: jdbc:postgresql://keycloak-db-postgresql-ha-pgpool.NS-NAME.svc.cluster.local/postgres --> https://www.keycloak.org/server/containers (chapter “Relevant options”) – glv Mar 24 '23 at 09:20
  • Where NS-NAME is the name of the namespace where postgresql resides. – glv Mar 24 '23 at 09:21
  • Try it in the KC admin console; in the realm settings there is something about frontendUrl – dreamcrash Mar 24 '23 at 11:03

0 Answers