
I have a Windows Server 2012 R2 installation, with Hyper-V, RRAS and NAT set up.

I'm having an issue with two of my VMs and RRAS NAT:

Essentially, these two VMs will not communicate outside the local subnet. They can communicate with everything within the local network (other VMs, the gateway), but nothing outside of it. The only difference between these two and the other VMs is that they are Linux-based systems; all the other VMs run Windows Server 2012 R2.

Is there something specific to Linux that would be causing this issue? One is a CentOS install, and the other is a Debian install.

If I assign either of the Linux boxes a publicly facing interface, they do have outside connectivity. Obviously I don't want to do this, as it wastes my IP address space.

Do note: the server can ping the default gateway (10.0.0.1) just fine; round-trip time is usually around 0.450 ms.
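To make the symptom concrete, here is roughly what the tests look like from the Debian guest (8.8.8.8 is an arbitrary outside host used as an example; the exact output is an assumption, not a capture from the affected machine):

```shell
# Local gateway replies normally:
ping -c 3 10.0.0.1

# Any host beyond the gateway gets no replies (100% packet loss):
ping -c 3 8.8.8.8

# A numeric traceroute shows the path dying right after the first hop:
traceroute -n 8.8.8.8
```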


For now, I just want to solve the issue with the Debian install, so here is a bit of the data from it:

# ifconfig
eth0    Link encap:Ethernet  HWaddr 00:15:5d:91:82:07
        inet addr:10.0.4.0  Bcast:10.0.255.255  Mask:255.255.0.0
        inet6 addr: fe80::215:5dff:fe91:8207/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:1842 errors:0 dropped:0 overruns:0 frame:0
        TX packets:7245 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:119697 (116.7 KiB)  TX bytes:701216 (684.7 KiB)

(Loopback not included)

# route
Kernel IP routing table
Destination    Gateway        Genmask        Flags Metric Ref    Use Iface
default        10.0.0.1       0.0.0.0        UG    0      0        0 eth0
localnet       *              255.255.0.0    U     0      0        0 eth0

# ip route
default via 10.0.0.1 dev eth0
10.0.0.0/16 dev eth0  proto kernel  scope link  src 10.0.4.0

The /etc/network/interfaces file is as follows:

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet static
        address 10.0.4.0
        netmask 255.255.0.0
        network 10.0.0.0
        broadcast 10.0.255.255
        gateway 10.0.0.1

Some more information:

# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source           destination

Chain FORWARD (policy ACCEPT)
target     prot opt source           destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source           destination

The network is basically set up as follows: the Hyper-V host (Windows Server) has one network port, which is plugged into the internet. The host has RRAS and NAT installed, and uses my entire public IP space to NAT for the VMs. Each VM gets a different /24 block of IP addresses from the 10.0.0.0/16 range, but they keep the /16 subnet mask. The Debian VM, for instance, has 10.0.4.0 - 10.0.4.255.
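For anyone trying to reproduce this, the host-side NAT state can be inspected from the classic RRAS `netsh routing ip nat` context (these commands are from memory of that context and should be treated as a sketch, not verified output):

```shell
:: Run on the Windows host, in an elevated command prompt.

:: Confirm NAT is enabled globally in RRAS:
netsh routing ip nat show global

:: List the interfaces added to NAT and whether each is private or public:
netsh routing ip nat show interface
```

The expected configuration here is the external (public) interface marked as the NAT public interface, and the Hyper-V virtual switch interface marked as private.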

Der Kommissar

1 Answer


As it turns out, the issue is due to checksum offloading.

For some reason, Linux guests don't work over NAT when IPv4 Checksum Offloading is enabled. (They work without NAT just fine, which is bizarre.)

After disabling IPv4 Checksum Offload on both the physical and virtual interfaces, and rebooting the server, everything works as expected.
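For reference, the same change can be made from an elevated PowerShell prompt on the host with the built-in `NetAdapterChecksumOffload` cmdlets (the adapter names below are assumptions; check yours with `Get-NetAdapter`):

```shell
# Show current checksum-offload settings for all adapters
Get-NetAdapterChecksumOffload

# Disable IPv4 checksum offload on the physical NIC and on the
# virtual switch adapter ("Ethernet" and "vEthernet (External)"
# are example names, not necessarily yours)
Disable-NetAdapterChecksumOffload -Name "Ethernet" -IpIPv4
Disable-NetAdapterChecksumOffload -Name "vEthernet (External)" -IpIPv4

# Reboot for the change to take full effect
Restart-Computer
```

Alternatively, offloads can be turned off inside the Linux guest itself with something like `ethtool -K eth0 tx off rx off`, but in this case the host-side change was what fixed it.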
