Is it possible to run Kubernetes in a shared AWS VPC private network, without DNS hostnames enabled?
I'm trying to set up a Kubernetes cluster using kops, with all of my nodes and the master running in private subnets of my existing AWS VPC. When I pass the VPC ID and network CIDR to the create command, I'm forced to have `EnableDNSHostnames=true`. I wonder if it's possible to set up a cluster with that option set to false, so that none of the instances launched in the private VPC get a public address. Thanks
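For context, the create command looks roughly like this (the cluster name, state store, VPC ID, CIDR, and zones below are placeholders, and flag availability depends on the kops version), followed by the AWS CLI call I use to check the VPC attribute in question:

```sh
# Rough sketch of the create command -- all names and IDs are placeholders.
kops create cluster \
  --name=k8s.example.com \
  --state=s3://my-kops-state-store \
  --vpc=vpc-0123456789abcdef0 \
  --network-cidr=10.0.0.0/16 \
  --zones=us-east-1a,us-east-1b \
  --associate-public-ip=false

# Checking the VPC attribute that kops complains about:
aws ec2 describe-vpc-attribute \
  --vpc-id vpc-0123456789abcdef0 \
  --attribute enableDnsHostnames
```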
- `EnableDNSHostnames=true` **does not** determine whether instances launched into a VPC will have public IP addresses. That isn't what this option controls. – Michael - sqlbot Dec 19 '16 at 12:15
- You can fork kops and change that option in the code. Your cluster should work, though a few tools will fail; see https://github.com/kubernetes/kops/issues/399. I believe this might be better in Kubernetes 1.5. – Pixel Elephant Dec 19 '16 at 16:05
- Thanks @PixelElephant. Is that issue related to kops, or to Kubernetes in general (e.g. things like `kubectl exec` or `kubectl logs` not working)? Also, what is the impact of setting up a cluster with `associate-public-ip` set to false on the master and nodes? Thanks – Ofer Velich Dec 19 '16 at 23:11
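To illustrate the first comment's point: public IP assignment is governed by the subnet's `MapPublicIpOnLaunch` attribute (or an explicit setting at launch time), not by `EnableDNSHostnames`. A rough sketch, with a placeholder subnet ID:

```sh
# Check whether a subnet auto-assigns public IPs (placeholder subnet ID):
aws ec2 describe-subnets \
  --subnet-ids subnet-0123456789abcdef0 \
  --query 'Subnets[].MapPublicIpOnLaunch'

# Turn auto-assignment off so instances launched there stay private:
aws ec2 modify-subnet-attribute \
  --subnet-id subnet-0123456789abcdef0 \
  --no-map-public-ip-on-launch
```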
1 Answer
It's completely possible to run in private subnets; that's how I deploy my cluster (https://github.com/upmc-enterprises/kubernetes-on-aws): all servers are in private subnets and access is granted via bastion boxes.
For kops specifically, it looks like there's support (https://github.com/kubernetes/kops/issues/428), but I'm not a big user of it, so I can't speak 100% to how well it works.
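If that support is present in your kops version, a private-topology invocation might look roughly like this (cluster name, state store, VPC ID, and zones are placeholders, and the `--topology`, `--networking`, and `--bastion` flags depend on the kops release you're running):

```sh
# Sketch of a private-topology kops cluster with a bastion host.
# Cluster name, state store, VPC ID, CIDR, and zones are placeholders.
kops create cluster \
  --name=k8s.example.com \
  --state=s3://my-kops-state-store \
  --vpc=vpc-0123456789abcdef0 \
  --network-cidr=10.0.0.0/16 \
  --zones=us-east-1a,us-east-1b \
  --topology=private \
  --networking=weave \
  --bastion
```

With a private topology, kops needs a CNI networking option (weave here) rather than the default kubenet, and the bastion host becomes the SSH entry point to the otherwise private nodes.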

Steve Sloka