
I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect, because the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, somewhat like docker-compose's depends_on directive?

flask yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
        - name: proxy23-api
          image: my_image
          ports:
            - containerPort: 5000
          env:
            - name: DB_URI
              # $(PROXY23_DB_SERVICE_SERVICE_HOST) is resolved when the container starts
              # and is not refreshed if the Service is later recreated
              value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
            - name: DB_NAME
              value: db
            - name: PORT
              value: "5000"
      imagePullSecrets:
        - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
    - port: 9002
      targetPort: 5000
      nodePort: 30002

mongodb yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
        - name: proxy23-db
          image: mongo:bionic
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: proxy23-storage
              mountPath: /data/db
      volumes:
        - name: proxy23-storage
          persistentVolumeClaim:
            claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30003
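
For reference, the DNS-based form I'd expect to use instead looks roughly like this (a sketch only, assuming both workloads run in the same namespace); on my cluster the name simply doesn't resolve and the connection times out:

          env:
            - name: DB_URI
              # hypothetical sketch: resolve the Service by name instead of the injected IP
              value: mongodb://proxy23-db-service:27017
              # fully qualified form: mongodb://proxy23-db-service.<namespace>.svc.cluster.local:27017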
  • Can you share the YAML of the MongoDB and Flask app deployments? – Ali Jul 30 '21 at 11:20
  • Your MongoDB Service should have its own VIP. If internal DNS resolution somehow does not work, you can get that Service's ClusterIP (kubectl get svc) and use it instead of a DNS name. kube-proxy is meant to update firewalling on Kubernetes nodes, so whenever you restart your MongoDB pod, it should re-route connections to your new Pod IP. – SYN Jul 30 '21 at 11:20
  • @Ali I included the yaml files in the question – LonelyDaoist Jul 30 '21 at 11:40
  • @SYN but if I recreate the MongoDB service, its cluster IP would change as well – LonelyDaoist Jul 30 '21 at 11:45
  • Usually if you redeploy the Service its cluster-internal IP address won't change. But the real answer should be to use the `proxy23-db-service.namespace.svc.cluster.local` host name; what doesn't work if you try it? – David Maze Jul 30 '21 at 13:46
  • You can add a livenessProbe to the flask pod using the same SERVICE_HOST variable; it will restart the pod if the service is not reachable (a sketch is below these comments). Also, like others suggested above, I would recreate the service as a ClusterIP and use the service name to connect to the DB from flask. – San Jul 30 '21 at 15:38
  • @DavidMaze if you simply kubectl apply, the cluster IP won't change, but if I delete the service and recreate it, then it will. DNS discovery doesn't work; for some reason it times out, that's why I went with env vars instead – LonelyDaoist Jul 31 '21 at 12:32
  • Concentrate on fixing the DNS issue (which is probably a cluster-administration issue, not a programming problem) rather than working around it. The Service DNS name will be stable even if you delete and reapply it (it's under your control) and you shouldn't have to restart downstream services (assuming they use DNS correctly). – David Maze Jul 31 '21 at 13:06
  • @WytrzymalyWiktor no, none so far – LonelyDaoist Aug 09 '21 at 09:06
  • Hello @LonelyDaoist. Have you tried David's suggestions? Try to debug DNS like described [here](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/). – Wytrzymały Wiktor Aug 12 '21 at 08:26
  • Hi @WytrzymalyWiktor, I tried that link but I haven't figured it out yet; nslookup just times out – LonelyDaoist Aug 13 '21 at 13:49
  • Have you managed to make it work? Could you please follow all the steps in this [documentation](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) and report the results. – matt_j Aug 23 '21 at 09:11
  • Hi @matt_j, it still doesn't work. As I said, I already tried that link; nslookup keeps timing out – LonelyDaoist Aug 23 '21 at 11:01
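
Following San's comment above, a rough sketch of what such a liveness probe could look like on the proxy23-api container. This is illustrative only: it assumes the image has python on its PATH, the timings are arbitrary, and a failing liveness probe restarts the container, not the whole pod:

          # goes under the proxy23-api container spec, next to ports/env
          livenessProbe:
            exec:
              command:
                - python
                - -c
                - |
                  import os, socket
                  # exit non-zero (probe failure) if the injected DB host stops answering on 27017
                  socket.create_connection((os.environ["PROXY23_DB_SERVICE_SERVICE_HOST"], 27017), timeout=2)
            initialDelaySeconds: 15
            periodSeconds: 20
            failureThreshold: 3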

0 Answers