
I am trying to host a Web API on Kubernetes using a Deployment and a Service. I want the API to be publicly accessible from the public internet using the public IP of the node. The cluster runs on a cloud provider's VMs (essentially bare metal, but on cloud VMs), and I bootstrapped it with kubeadm.

I am able to access the API from inside the vnet (virtual network) using a NodePort Service. My API listens on port 80.

Here are the YAML snippets:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <api>-deployment
  labels:
    app: <api>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <api>
  template:
    metadata:
      labels:
        app: <api>
    spec:
      containers:
      - name: <api>
        image: <api_image>
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: <api>
  labels:
    app: <api>
spec:
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: <api>

The issue is that I can only access my API from my local vnet, i.e. using the private IP address from other VMs inside the vnet. (I have already whitelisted port 30000 in my network security group so that traffic is allowed.)
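
For reference, this is roughly how I am testing it (the IPs below are placeholders, not my real ones):

# from another VM inside the vnet -- this works
curl http://10.0.0.4:30000/

# from my machine on the public internet, using the node's public IP -- this times out
curl -m 5 http://20.100.50.10:30000/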

I hosted the same solution on IIS, opened up the ports on my VM, configured the security group rules at my cloud provider, and then hit the public IP of that VM (the public IP shown on the cloud provider's portal), and it worked without a problem. I am trying to do the same here.

Should I be using NodePort? Is there any other way that is quick and simple? I just want my API to be publicly accessible from the public internet, i.e. reachable on one single IP address (logically the node's public IP address displayed on my cloud provider's portal for that VM), without using any extra cloud-provider load balancers.

I know Ingress exists, but how does that solve my problem here? My API is already accessible, just not from outside the vnet. What exactly is going wrong here?

(Even though I want to expose the API publicly, it will mainly be used by other systems, not humans.)

Edit: here are the screenshots

kubectl get pods

kubectl get svc

kubectl get svc -o wide

Thanks


1 Answer


If your VM is inside a private subnet (i.e. no public IP assigned to it) and you want your API to be publicly accessible, then you need a VM with a public IP that acts as a reverse proxy / jump box / entry point to your cluster.

If you want a quick solution, you can deploy a reverse proxy such as Apache or Nginx on the new VM that forwards requests to `<api-service>:<NodePort>`; that way you will be able to reach your API through the new jump box.
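
A minimal sketch of that quick option, assuming Nginx is installed on the jump-box VM; the node private IP 10.0.0.4 and NodePort 30000 are illustrative, substitute your own values:

# write a simple proxy config that forwards everything to the NodePort on one cluster node
cat <<'EOF' | sudo tee /etc/nginx/conf.d/api.conf
server {
    listen 80;
    location / {
        proxy_pass http://10.0.0.4:30000;
    }
}
EOF
# validate and reload Nginx
sudo nginx -t && sudo systemctl reload nginx

Pointing at a single node's IP is a single point of failure; if needed you can list several nodes in an `upstream` block.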

Although this works, it is not the most scalable option, so as you mentioned you would want an Ingress Controller, which in turn also needs an entry point / load balancer for your cluster. More details are in my other SO answer.
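
For completeness, a rough Ingress sketch, assuming an ingress-nginx controller is already installed and reachable from outside the cluster; the resource name is made up and `<api>` is the Service from the question:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: <api>
            port:
              number: 80
EOF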

Read this for bare-metal considerations

Sibtain
  • All my VMs, including my nodes, have public IPs associated with them. That's how I SSH into them. So they are reachable from the public net (or am I missing something?). Thanks for the answer though, I'll check your other answer. – Adil Abdul Rahman Feb 16 '23 at 13:07
  • In that case you should be able to access your API from _any_ of the nodes in the cluster using the NodePort; if not, then there may be an issue with the firewall configuration of the VMs that is blocking the port (a couple of checks for this are sketched after this comment thread). Also, you should update your question with the outputs of the `kubectl get svc` and `kubectl get pod` commands. – Sibtain Feb 16 '23 at 13:28
  • Yes, I was able to access my API from every node and every other VM in the subnet (using the private IP addresses of the VMs that I found on my cloud provider's portal or via the `hostname -I` command). I have updated the question to include the screenshots. – Adil Abdul Rahman Feb 16 '23 at 14:02
  • Based on the screenshot, your Service (the 2nd one) is running as a `ClusterIP`, not a `NodePort`. Are you sure this is the namespace where you exposed your API as a `NodePort` Service? – Sibtain Feb 16 '23 at 14:05
  • Sorry, I was messing around with ClusterIP. Before that I had configured my Service as type NodePort. That allowed me to access my API from within the vnet but outside the cluster (there are other VMs in the vnet that are not in the cluster). But I want my API to be available to the outside world. – Adil Abdul Rahman Feb 16 '23 at 14:11
  • Please update it to `NodePort` and also share how (and from where) you are trying to access the API and what the results are. – Sibtain Feb 16 '23 at 14:15
  • Hey, thanks a lot for your patience. I found the underlying cause of the issue. It **was** accessible from the public net the whole time, but the separate work VM that I was testing from likely had that port blacklisted, which is why I wasn't able to access it. – Adil Abdul Rahman Feb 16 '23 at 16:22
  • great! good to hear – Sibtain Feb 16 '23 at 16:30
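
The checks hinted at above, as a rough sketch (the first two run on a cluster node, the last one from the client machine; 30000 and the placeholder address match the values used in the question):

# kube-proxy normally holds NodePorts open on every node
sudo ss -lntp | grep 30000
# iptables rules (KUBE-NODEPORTS) referencing the port, when kube-proxy runs in iptables mode
sudo iptables-save | grep 30000
# reachability from outside; a timeout here while the above look fine points at a firewall/NSG on the path
curl -m 5 http://<node-public-ip>:30000/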