
I am in a strange situation that I cannot figure out how to debug.

In an enterprise, I am given a company-managed Kubernetes cluster.

I created a load balancer for this cluster so that it is accessible inside the company at https://mydomain.company.com

Then I ran helm repo add kong https://charts.konghq.com and helm repo update, fetched and untarred kong/kong into a directory named kong-helm, and ran:

helm upgrade --install --set ingressController.installCRDs=false -n kong --create-namespace kong kong-helm/kong

Everything looks good and works fine. For example, I have a deployment like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "43"
    meta.helm.sh/release-name: canary-mktplc-catalog
    meta.helm.sh/release-namespace: default
  labels:
    app: canary-mktplc-catalog
    app.kubernetes.io/managed-by: Helm
  name: canary-mktplc-catalog-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canary-mktplc-catalog
  template:
    metadata:
      labels:
        app: canary-mktplc-catalog
      name: canary-mktplc-catalog
    spec:
      containers:
      - image: ******
        imagePullPolicy: Always
        name: canary-mktplc-catalog
        ports:
        - containerPort: 8000
          name: http
          protocol: TCP
      dnsPolicy: ClusterFirst

and a service like this:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: canary-mktplc-catalog
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2023-04-10T18:58:23Z"
  labels:
    app: canary-mktplc-catalog
    app.kubernetes.io/managed-by: Helm
  name: canary-mktplc-catalog-service
  namespace: default
spec:
  clusterIP: 10.109.168.243
  clusterIPs:
  - 10.109.168.243
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: http
  selector:
    app: canary-mktplc-catalog
  type: ClusterIP
status:
  loadBalancer: {}

and an ingress resource like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: canary-mktplc-catalog
    meta.helm.sh/release-namespace: default
  labels:
    app: canary-mktplc-catalog
    app.kubernetes.io/managed-by: Helm
  name: canary-mktplc-catalog-ingress
  namespace: default
spec:
  ingressClassName: kong
  rules:
  - host: mydomain.company.com
    http:
      paths:
      - backend:
          service:
            name: canary-mktplc-catalog-service
            port:
              name: http
        path: /api/mktplc-catalog
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 10.110.25.146

Using Postman:

  • sending a GET request to https://mydomain.company.com/api/mktplc-catalog returns "Hello World I received a get request"

  • sending a GET request to http://mydomain.company.com/api/mktplc-catalog returns "Hello World I received a get request"

  • sending a POST request to https://mydomain.company.com/api/mktplc-catalog returns "Hello World I received a post request"

But ...

  • sending a POST request to http://mydomain.company.com/api/mktplc-catalog returns "Hello World I received a get request"

The last one is turned into a GET request. This is very strange and I do not know how to debug it.

Another observation is that these two both return the correct response:

  • curl --location --request POST 'http://mydomain.company.com/api/mktplc-catalog'

  • curl --location --request POST 'https://mydomain.company.com/api/mktplc-catalog'
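My current suspicion is a redirect: if the plain-HTTP endpoint answers with a 301/302 pointing at HTTPS, a client that follows redirects (as Postman does by default) may replay the request as a GET, whereas curl --request POST forces the POST method on every hop of the redirect chain. A minimal local sketch of that downgrade, using Python's urllib in place of the actual Kong setup (the server, port, and paths here are made up purely for the demonstration):

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def _serve(self):
        if self.path == "/old":
            # 301, the status commonly used for HTTP -> HTTPS redirects
            self.send_response(301)
            self.send_header("Location", "/new")
            self.end_headers()
        else:
            # Echo back which HTTP method actually arrived
            body = f"received {self.command}".encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    do_GET = do_POST = _serve

    def log_message(self, *_):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 301 and, like redirect-following clients
# generally, replays the POST as a GET.
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/old", data=b"payload")
result = resp.read().decode()
print(result)
server.shutdown()
```

The same check against the cluster from a shell would be something like curl -si -X POST http://mydomain.company.com/api/mktplc-catalog | head, looking at the status line: 301/302/303 permit the method change, while 307/308 require clients to preserve POST.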

I found a similar question here:

Kubernetes NGINX Ingress redirecting post requests to GET

The same solution worked for me: when I turn on this setting, the correct response is returned.

(screenshot of the Postman setting)

And the solution to this question did not work for me, because I already have the host hardcoded.

Amin Ba