
I've got a Kubernetes cluster with an nginx ingress controller set up for public endpoints. That works great, but I have one service that I don't want to expose to the public; I only want to expose it to people who have VPC access via VPN. The people who will need to access this route will not have kubectl set up, so they can't use port-forward to send it to localhost.

What's the best way to set up ingress for a service that will be restricted to only people on the VPN?

Edit: thanks for the responses. As a few people guessed I'm running an EKS cluster in AWS.

ZECTBynmo
    I agree with the answers provided, but if you need a more detailed example, please let us know which cloud provider you are using! – Will R.O.F. Apr 17 '20 at 12:33

2 Answers


It depends a lot on your Ingress Controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer service rather than a public LB, and then restrict that service and/or ingress to allow traffic only from the IPs of the VPN pods.
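As a rough sketch of what this could look like with the ingress-nginx Helm chart on AWS (the ingress class name and CIDR below are placeholders, not values from the question):

```yaml
# values-internal.yaml -- values for a SECOND ingress-nginx release,
# reachable only from inside the VPC
controller:
  # separate ingress class so existing public Ingresses are unaffected
  ingressClass: nginx-internal
  service:
    annotations:
      # ask the AWS cloud provider for an internal (VPC-only) load balancer
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # optionally restrict at the Service level to the VPN/VPC range
    loadBalancerSourceRanges:
      - "10.0.0.0/16"   # placeholder: your VPN/VPC CIDR
```

The private service's Ingress would then set `kubernetes.io/ingress.class: nginx-internal` so it is served only by the internal controller.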

coderanger

Since you are talking about a "VPC", and assuming you have your cluster in AWS, you probably need to do what @coderanger said.

Deploy a new ingress controller with a Service of type `LoadBalancer` and add the annotation `service.beta.kubernetes.io/aws-load-balancer-internal: "true"`.
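A minimal sketch of such a Service (the name and selector are placeholders; your controller's labels will differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal   # placeholder name
  annotations:
    # tells the AWS cloud provider to create an internal ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder: match your controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```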

Check here for the possible annotations that you can add to a load balancer in AWS: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancers

You can also create a security group, for example one that only allows traffic from your VPN's CIDR, and attach it to the load balancer with `service.beta.kubernetes.io/aws-load-balancer-security-groups`.
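For instance, the annotations on the Service might look like this (the security group ID is a placeholder for one you create yourself):

```yaml
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # attach a pre-created security group that only allows your VPN's CIDR
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-0123456789abcdef0"
```

Note that this annotation replaces the security groups Kubernetes would otherwise generate for the load balancer, so the group you supply must allow all the traffic the controller needs.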

paulopontesm