
I encountered this issue today with the following setup:

[Cloud Server] --> [Cloud Gateway Server] <- WireGuard -> [Home Router] <-- [Home Device]
  • I have source NAT on my router. My home devices can SSH into my cloud server.
  • I have no NAT on my cloud gateway server. The routing table on the gateway is correct, and iptables is set to forward packets.

I noticed that, from cloud servers that route through the gateway, I can ping my home devices, but no SSH connection can be established. After I set up source NAT on the cloud gateway for packets forwarded from the cloud subnet, everything works fine.
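
For reference, this is roughly how I would capture the traffic to see where the SSH packets get lost (the home device's interface name is a placeholder; wg-xnzg is the gateway's tunnel device from the routing table in the update below):

# on the cloud gateway: watch SSH traffic crossing the tunnel
tcpdump -ni wg-xnzg 'tcp port 22'
# on the home device: check whether the SYN arrives and a SYN/ACK is sent back
tcpdump -ni eth0 'tcp port 22 and host 172.16.0.2'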

My question is: why is NAT required on the cloud gateway? Does it have anything to do with the fact that packets from home devices are NATed? Or is there some fundamental limitation arising from the fact that the cloud subnet, home subnet, and WireGuard subnet use different IP ranges?

I simulated how the packets are processed and cannot figure out the issue. I have to admit that my understanding of networking comes mostly from Wikipedia and googling around. I am looking for a theoretical explanation, if possible. Thanks in advance!

Update:

To better describe the scenario, I have the following devices:

  • Server in the cloud, with an IP 172.16.0.2.
  • WireGuard gateway server in the cloud, with cloud IP 172.16.0.3, and WireGuard IP 192.168.0.1.
  • Office router, IP 10.0.0.1 and WireGuard IP 192.168.0.2.
  • Office device, IP 10.0.0.2.

There is an additional cloud gateway, 172.16.0.1, that I have no access to. On the cloud server I have set routes for both 192.168.0.0/24 and 10.0.0.0/24 with 172.16.0.3 as the next hop.
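
Roughly, in command form (my reconstruction, not the exact commands I ran):

# on the cloud server 172.16.0.2: reach the WireGuard and office subnets via the gateway
ip route add 192.168.0.0/24 via 172.16.0.3
ip route add 10.0.0.0/24 via 172.16.0.3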

The INPUT and OUTPUT chains are empty with default policy ACCEPT on both 172.16.0.2 and 10.0.0.2. I believe their FORWARD chains do not matter here, but they also default to ACCEPT.
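
This can be checked on each host with something like:

iptables -S INPUT
iptables -S OUTPUT
iptables -S FORWARD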

In my original configuration, 172.16.0.3 (the WireGuard gateway in the cloud) has a route for 10.0.0.0/24 via its WireGuard network device. Similarly, 10.0.0.1 has a route for 172.16.0.0/24 via its WireGuard network device. Their FORWARD chains default to ACCEPT and are empty apart from the rules added by Docker.
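
In command form, those routes correspond to something like this (the office router's WireGuard interface name is a placeholder, since I only know the gateway's wg-xnzg):

# on 172.16.0.3 (cloud gateway): office subnet goes over the tunnel
ip route add 10.0.0.0/24 dev wg-xnzg
# on 10.0.0.1 (office router): cloud subnet goes over the tunnel
ip route add 172.16.0.0/24 dev wg0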

Lastly, 10.0.0.1/192.168.0.2 has a source NAT rule for the office subnet (in iptables-save format):

-A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE

What I can do:

  • 172.16.0.3/192.168.0.1 <=> 10.0.0.0/16, ping, ssh, http.
  • 172.16.0.2 <= 10.0.0.0/16, in this direction only: ping, ssh, http.
  • 172.16.0.2 => 10.0.0.1/192.168.0.2, ping, ssh, http.
  • 172.16.0.2 => 10.0.0.2, ping only.

After I set a NAT rule on 172.16.0.3/192.168.0.1:

-A POSTROUTING -s 172.16.0.0/24 -j MASQUERADE

172.16.0.2 => 10.0.0.2 now works with ssh/http as well.
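
To confirm the gateway is really translating these connections, something like this on 172.16.0.3 should list them with the rewritten source address (assuming the conntrack tool is installed):

conntrack -L | grep 10.0.0.2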

The routing table on the cloud gateway looks like:

default via 172.16.0.1 dev eth0 proto dhcp src 172.16.0.3 metric 100 
10.0.0.0/16 via 192.168.0.2 dev wg-xnzg proto static onlink 
172.16.0.1 dev eth0 proto dhcp scope link src 172.16.0.3 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
  • Why did you conclude that NAT is required? Please show us the direct symptoms, not just your interpretation of them. It is probably that a firewall somewhere is blocking direct packets and permitting NATed ones. WireGuard itself doesn't require NAT. – Nikita Kipriyanov Feb 24 '21 at 10:24
  • @NikitaKipriyanov Thanks for your input. However, I did double-check that there are no special rules in the forward chains, and I have contacted the cloud provider to make sure they don't filter packets with unknown IPs either. I have added more info in the update, describing the phenomenon more clearly. Do you mind taking a look? Thanks! – Minsheng Liu Feb 24 '21 at 13:27
  • I suspect you still have some routing or filtering issue. NAT circumvents it by changing addresses, so packets fall into a permitted range. So, what are the routing tables on the server and on the office router in the "original" configuration? I suggest you replace the verbal description of "how it was" with similar dumps of the routing tables. Please also show how firewall filtering is configured on both routers (`iptables-save`). Also, did you try capturing traffic on the routers and the server with `tcpdump` to see whether it appears where expected? – Nikita Kipriyanov Feb 24 '21 at 14:28

0 Answers