
I am looking to resolve records in a Route 53 private hosted zone using customised forwarding rules configured in an on-prem DNS service. The forwarding rule would effectively say, "for my private domain xyz, forward queries to 10.1.1.2", where 10.1.1.2 is a private IP address in an AWS VPC corresponding to a resolver endpoint.
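For illustration, the forwarding rule might look something like this in BIND (a sketch only; assumes a BIND-based on-prem resolver and a hypothetical zone name xyz.internal):

    // Forward all queries for the private domain to the VPC resolver IP
    zone "xyz.internal" {
        type forward;
        forward only;                // never recurse for this zone locally
        forwarders { 10.1.1.2; };    // resolver endpoint / VPC DNS address
    };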

I am looking to understand the differences between forwarding the queries to the standard .2 address in the VPC associated with the private zone, and setting up an inbound Route 53 Resolver endpoint to receive and resolve queries.

Apart from a difference in price, they both seem to do the same thing. I have confirmed using dig that I can use the .2 address to resolve private hosted zone records from outside the VPC (via a transit gateway).
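For reference, the test looked roughly like this (hypothetical record name; run from a host on the far side of the transit gateway):

    # Query the VPC's .2 resolver directly for a private hosted zone record
    dig @10.1.1.2 host.xyz.internal A +short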

So technically, why would I want to use an inbound resolver endpoint, when I can resolve the queries more cheaply using the .2 address? What am I missing here?

I found some AWS doco that indicates .2 addresses are not usable outside the VPC, but I have confirmed this is incorrect.


2 Answers


Turns out that the AWS Transit Gateway service does not support DNS query resolution against ".2" resolvers across attached VPCs. You may see DNS queries working in some Availability Zones in some Regions, as well as from on-premises, but this behaviour is not supported on AWS Transit Gateway and is not a recommended configuration from a security standpoint. To implement centralised DNS management using AWS Transit Gateway, please follow this blog post:

Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway


I found some AWS doco that indicates .2 addresses are not usable outside the VPC, but I have confirmed this is incorrect.

I would re-think whether you truly are querying the .2 DNS server from outside the VPC. TGW deploys ENIs into the attached VPC, so your "outside the VPC" traffic is in fact arriving via an ENI inside the VPC - hence the VPC-native-traffic-only rule is not actually being broken. A simple lab with two VPCs, an EC2 instance in each, and one instance's /etc/resolv.conf changed to point at the other VPC's .2 DNS address will very easily show that the VPC DNS server rejects queries whose source IP is outside its VPC CIDR. Standing up an inbound resolver endpoint and pointing /etc/resolv.conf at its IP instead will fix the "issue". A sketch of that lab is below.
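The lab, with hypothetical addresses (the remote VPC B's .2 resolver at 10.2.0.2, an inbound resolver endpoint in that VPC at 10.2.0.10, and a made-up record name):

    # On the EC2 in VPC A, /etc/resolv.conf would read: nameserver 10.2.0.2
    # Querying VPC B's .2 address directly:
    dig @10.2.0.2 host.xyz.internal A     # fails: source IP is outside VPC B's CIDR

    # Switch /etc/resolv.conf to the inbound resolver endpoint instead:
    dig @10.2.0.10 host.xyz.internal A    # works: the endpoint proxies the query from inside VPC B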

tldr: The VPC DNS server only accepts requests from its own VPC CIDR range. An inbound resolver endpoint acts as a reverse proxy and changes the request's source IP to the resolver's own IP (hence the VPC DNS server is happy to accept it). TGW gives the illusion of outside-VPC traffic, but the traffic actually enters through an ENI within the VPC.