
This is similar to the issue addressed in "OpenVPN and routing".

My server runs both a racoon-based L2TP/PPTP VPN service and an OpenVPN service. The racoon service assigns addresses in the 10.0.77.0/24 range (on the en1 interface), and the OpenVPN service assigns client addresses in the 10.0.88.0/24 range (on the utun0 interface). Clients connect to both services from the public internet via a public IP on the en0 interface, and are NATted back out to the public internet on another public IP on the same interface (en0:0).
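For reference, here's the same layout boiled down to pf-macro form (these macros aren't actually used in the ruleset below, they just summarize the addressing):

ext_if  = "en0"      # public-facing interface, clients connect here
nat_ip  = "(en0:0)"  # second public IP used for outbound NAT
l2tp_if = "en1"      # racoon L2TP/PPTP clients, 10.0.77.0/24, gateway 10.0.77.1
ovpn_if = "utun0"    # OpenVPN clients, 10.0.88.0/24, gateway 10.0.88.1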

See my pf ruleset below. I'm 99.9% positive that this is a problem with my pf rules.

With the "nat on en0..." rule in effect, all VPN clients can access the internet properly. The racoon clients can also reach services on my server's other IPs, but OpenVPN clients can only reach those services via the 10.0.88.1 address. The OpenVPN clients can ping and traceroute to the other IPs, but can't access any services on them. While a ping is running, tcpdump shows the ICMP traffic on the utun0 interface, but shows nothing on the en0 interface (which is assigned the IP address being pinged).
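To be concrete, this is the sort of thing I mean by "monitoring with tcpdump", with an OpenVPN client pinging one of the server's other public IPs:

sudo tcpdump -ni utun0 icmp   # the client's ping shows up here
sudo tcpdump -ni en0 icmp     # nothing appears here for the same ping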

If I disable the "nat on en0" rule, then obviously no clients can reach the internet, but all clients can connect to the server's other IPs. Something about the interaction between the nat rule, the way OpenVPN handles its tunnelling, and the way pf filters the traffic is breaking local interface access... but apparently I'm not smart enough with pf to figure it out.
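For completeness, "disabling" the rule just means commenting it out and reloading; I check what pf actually loaded with the usual pfctl commands (ruleset assumed to live at /etc/pf.conf):

sudo pfctl -f /etc/pf.conf   # reload the ruleset
sudo pfctl -s nat            # show the translation (nat/rdr) rules pf loaded
sudo pfctl -s rules          # show the filter rules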

Here's the pf config. Can anyone spot the problem?

set block-policy drop
set fingerprints "/etc/pf.os"
scrub-anchor "/*" all fragment reassemble
nat-anchor "/*" all
rdr-anchor "/*" all
anchor "/*" all
dummynet-anchor "/*" all
table <vpn-nets> persist { 10.0.77.0/24 10.0.88.0/24 }
nat-anchor "/*" all
rdr-anchor "/*" all
pass quick on lo0 all flags S/SA keep state
anchor "/*" all
anchor "/*" all
anchor "/*" all
anchor "/*" all
nat on en0 from ! (en0) to any -> (en0:0)
table <__automatic_0> const { 127.0.0.1 10.0.88.1 10.0.77.1 }
pass inet6 from ::1 to any flags S/SA keep state
pass on lo0 inet6 from fe80::1 to any flags S/SA keep state
pass on en1 inet6 from fe80::223:dfff:fede:f372 to any flags S/SA keep state
pass inet from <__automatic_0> to any flags S/SA keep state
pass from <vpn-nets> to any flags S/SA keep state
pass on utun0 all flags S/SA keep state
pass on en1 all flags S/SA keep state
pass in on utun0 all keep state fragment
pass out on en0 from any to <vpn-nets> flags S/SA keep state
table <blockedHosts> persist file "/var/db/af/blockedHosts"
block drop in quick from <blockedHosts> to any
pass quick on lo0 all flags S/SA keep state
pass in log all flags S/SA keep state
pass out log all flags S/SA keep state
    Come on, give us something to work with here.. So what does the route table look like on the client? What does a traceroute show. Have you done a tcpdump on the OpenVPN interface? Do you see the client attempting to contact the other addresses over the VPN link? – Zoredache Oct 29 '14 at 16:36
  • I've updated the question. Tell me what other info I need to give. – JLG Nov 02 '14 at 04:55

1 Answer


Turns out I just needed a "rdr" rule right after the nat declaration, to redirect anything coming from an OpenVPN client destined for the server's public IP to the OpenVPN virtual gateway:

nat on en0 from ! (en0) to any -> (en0:0)
rdr pass on utun0 inet proto { tcp udp } from 10.0.88.0/24 to en0 -> 10.0.88.1

Apparently racoon does this on its own (?), but OpenVPN does not. I still can't figure out why pings work without the rule but TCP/UDP doesn't, which is why the rule only redirects tcp and udp. This pf one-liner was MUCH easier than screwing with split-horizon DNS.
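If anyone wants to verify the same fix, reloading and inspecting the translation rules is enough to confirm both lines are active (again assuming the ruleset is in /etc/pf.conf):

sudo pfctl -f /etc/pf.conf   # reload the ruleset
sudo pfctl -s nat            # both the nat and rdr rules should be listed
sudo pfctl -s state          # shows active translations while a client is connected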
