
Maybe someone can help with this case. I would be very grateful.

I have two EKS clusters (Staging and Production), each in a different region and a different VPC.

On both clusters, I have enabled the public EKS endpoint (restricted to specific IPs) and the private endpoint.

I want to keep access to the clusters from outside (just in case), and I want the clusters to be able to talk to each other over internal IPs. For this, I have configured VPC peering. DNS works fine for other AWS services: for example, I can ping and resolve the private names of EC2 instances from both VPCs.

If I ping the EKS endpoint of the Staging cluster from the Staging VPC, it resolves to the private EKS endpoint IP, so everything is OK.

But if I ping the EKS endpoint of the Production cluster from the Staging VPC, it resolves to the public IP, and this is the problem. (As I mentioned before, cross-VPC DNS resolution itself works: I can resolve, for example, the DNS name of an EC2 instance in the Production VPC from the Staging VPC.)

Maybe this is something specific to EKS, or am I doing something wrong?

On both VPCs I have enabled these DNS parameters:

  enable_dns_support   = true
  enable_dns_hostnames = true
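
For reference, a simplified sketch of where these flags live on the VPC resource (the resource name and CIDR below are placeholders, not my real values):

  # Sketch only -- name and CIDR are placeholders.
  resource "aws_vpc" "staging" {
    cidr_block           = "10.0.0.0/16" # placeholder CIDR
    enable_dns_support   = true
    enable_dns_hostnames = true

    tags = {
      Name = "staging-vpc" # placeholder name
    }
  }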

VPC peering is configured to allow DNS resolution from both sides:

  requester {
    allow_remote_vpc_dns_resolution = true
  }
  accepter {
    allow_remote_vpc_dns_resolution = true
  }
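
Since the clusters are in different regions, these blocks are applied through aws_vpc_peering_connection_options on each side. Roughly like this (the provider aliases, resource names and references below are placeholders, so treat it as a sketch rather than my exact code):

  # Sketch only -- provider aliases ("aws.staging", "aws.production") and the
  # peering resource names are assumptions, not my exact code.
  resource "aws_vpc_peering_connection_options" "requester" {
    provider                  = aws.staging
    vpc_peering_connection_id = aws_vpc_peering_connection.staging_to_production.id

    requester {
      allow_remote_vpc_dns_resolution = true
    }
  }

  resource "aws_vpc_peering_connection_options" "accepter" {
    provider                  = aws.production
    vpc_peering_connection_id = aws_vpc_peering_connection_accepter.production.id

    accepter {
      allow_remote_vpc_dns_resolution = true
    }
  }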

EKS is configured with both the public and the private endpoint enabled:

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = true
    public_access_cidrs     = ["MY_IP_HERE"]
  }
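
The full cluster resource looks roughly like this (cluster name, IAM role reference and subnet IDs are placeholders):

  # Sketch only -- name, role reference and subnet variable are placeholders.
  resource "aws_eks_cluster" "production" {
    name     = "production"                 # placeholder name
    role_arn = aws_iam_role.eks_cluster.arn # assumed IAM role resource

    vpc_config {
      subnet_ids              = var.private_subnet_ids # assumed variable
      endpoint_private_access = true
      endpoint_public_access  = true
      public_access_cidrs     = ["MY_IP_HERE"]
    }
  }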

I did some tests around this and didn't find a solution; I believe I'm probably doing something wrong.
I expect the clusters to be able to communicate with each other over the internal network.
Jack