
We currently have a multi-tenant Laravel backend application with Pusher-compatible WebSockets enabled on the same app. The application is built into a Docker image, hosted on the DigitalOcean Container Registry, and deployed to our Kubernetes cluster via Helm.
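For context on where port 6001 in the manifests below comes from: the websocket server runs inside the same container as the HTTP app. As a rough sketch only (assuming a Pusher-compatible server such as beyondcode/laravel-websockets; the real image has its own entrypoint, presumably keyed off the CONTAINER_ROLE env var, and the container name here is illustrative), the websocket side amounts to:

# Illustrative fragment only: in the real image one entrypoint runs both the HTTP
# server (port 80) and the websocket server (port 6001). If the websocket side were
# split out into its own container it would look roughly like this.
containers:
  - name: backend-api-ws
    image: registry.digitalocean.com/rock/backend-api:latest
    command: ["php", "artisan", "websockets:serve", "--port=6001"]
    ports:
      - containerPort: 6001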

We also have a front-end application built in Angular that connects to the backend over port 80 on the /ws/ path to establish a WebSocket connection.

When we try to access tenant1.example.com/ws/ we get a 502 Bad Gateway error, which suggests the ports aren't mapping correctly. However, tenant1.example.com on port 80 works just fine.
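To spell out the routing we expect: a request to tenant1.example.com/ws/ should hit the nginx ingress on port 80, be routed to the dedicated websocket Service on port 6001, and land on containerPort 6001 in the pod, while every other path goes to the plain HTTP Service on port 80. Condensed from the full rendered manifests below, the relevant pieces are:

# Condensed view of the /ws/ path-to-port mapping we expect (full manifests below).
# Ingress rule for the websocket path:
- path: /ws/
  pathType: Prefix
  backend:
    service:
      name: tenant1-backend-api-ws-service
      port:
        number: 6001
# Websocket Service ports:
ports:
  - port: 6001        # exposed by the Service
    targetPort: 6001  # containerPort on the pod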

Our Helm chart YAML (rendered output) is as follows:

NAME: tenant1
LAST DEPLOYED: Fri Dec 11 14:34:00 2020
NAMESPACE: tenants
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
subdomain: tenant1

COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: true
  maxReplicas: 1
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  enabled: true
  hosts:
  - host: example.com
    pathType: Prefix
  tls: []
migrate:
  enabled: true
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources:
  requests:
    cpu: 10m
rootDB: public
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
setup:
  enabled: true
subdomain: tenant1
tolerations: []


---
# Source: backend-api/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant1-backend-api
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      name: 'http'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-ws-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
    - port: 6001
      targetPort: 6001
      name: 'websocket'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant1-backend-api-deployment
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app: tenant1-backend-api-deployment
  template:
    metadata:
      labels:
        app: tenant1-backend-api-deployment
        namespace: tenants
    spec:
      containers:
      - name: backend-api
        image: "registry.digitalocean.com/rock/backend-api:latest"
        imagePullPolicy: Always
        ports:
          - containerPort: 80
          - containerPort: 6001
        resources:
            requests:
              cpu: 10m
        env:
          - name: CONTAINER_ROLE
            value: "backend-api"
          - name: DB_CONNECTION
            value: "pgsql"
          - name: DB_DATABASE
            value: tenant1
          - name: DB_HOST
            valueFrom:
              secretKeyRef:
                name: postgresql-database-creds
                key: DB_HOST
          - name: DB_PORT
            valueFrom:
              secretKeyRef:
                name: postgresql-database-creds
                key: DB_PORT
          - name: DB_USERNAME
            valueFrom:
              secretKeyRef:
                name: postgresql-database-creds
                key: DB_USERNAME
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: postgresql-database-creds
                key: DB_PASSWORD
---
# Source: backend-api/templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: tenant1-backend-api-hpa
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tenant1-backend-api-deployment
  minReplicas: 1
  maxReplicas: 1
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
---
# Source: backend-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant1-backend-api-ingress
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: tenant1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant1-backend-api-service
                port:
                  number: 80
          - path: /ws/
            pathType: Prefix
            backend:
              service:
                name: tenant1-backend-api-ws-service
                port:
                  number: 6001

  • Have you tried bypassing the ingress and reaching your app directly? Does it work without it? – acid_fuji Dec 14 '20 at 10:19
  • **Latest update** I have used a load balancer on the service (sketched below) and everything is working as expected. I was also able to use tcpdump to listen for incoming connections via "tcpdump port 6001 and '(tcp-syn|tcp-ack)!=0'". I believe the trouble I am having is that port 6001 isn't being opened on the ingress to forward to the pod, i.e. tenant1.example.com:6001 does not work through the nginx ingress, but via a load balancer directly it does. – bcp-kcor1 Dec 14 '20 at 12:15
  • Just to confirm before further checks: is your `rewrite-target: /` intentional? Because if your application expects `/ws/`, this won't work. – acid_fuji Dec 14 '20 at 15:33
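For reference, a sketch of the load-balancer workaround mentioned in the latest update above: exposing the websocket Service directly as type LoadBalancer instead of going through the ingress (name, namespace, and selector match the manifests in the question; this is the workaround, not the intended setup):

apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-ws-service
  namespace: tenants
spec:
  # LoadBalancer provisions an external (e.g. DigitalOcean) load balancer and
  # forwards 6001 straight to the pod, bypassing the nginx ingress entirely.
  type: LoadBalancer
  ports:
    - port: 6001
      targetPort: 6001
      name: websocket
  selector:
    app: tenant1-backend-api-deployment

This is also consistent with the rewrite-target question: with nginx.ingress.kubernetes.io/rewrite-target: / and no capture group, a request matching /ws/ is rewritten to / (the rest of the path is dropped) before it reaches the websocket server, so if the server needs to see the original path the ingress route would fail while the direct load balancer, which passes the path through untouched, would work.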
