
I have the following setup:

2 x Linode VPS
1 x lab machine (physical) running 4 VPS

My goal is to make it so all nodes act as if they are on the same LAN. This will let me write iptables rules that allow only local traffic, instead of adding a new iptables entry for EVERY server that needs access to a port on the target node.
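To illustrate, the rule I'd like to end up with would look roughly like this (the 10.0.0.0/18 range and the per-host example are placeholders):

    # One rule trusting the whole virtual LAN...
    iptables -A INPUT -s 10.0.0.0/18 -j ACCEPT
    # ...instead of one pinhole per peer per port, e.g.:
    # iptables -A INPUT -s 192.0.2.10 -p tcp --dport 3306 -j ACCEPT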

I have done some preliminary research and testing and can't quite seem to figure out the best solution for what I am trying to accomplish. I have been practicing with two of my lab VPS, which reside on separate subnets, before I start configuring the actual production VPS.

The lab machine has two physical NICs: eth0 and eth1. eth1 is set up as a bridge (br0) to provide virtual NICs to the VPS.
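For reference, on CentOS (which these machines run) the bridge is defined roughly like this; the netmask is illustrative:

    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- enslaved to the bridge
    DEVICE=eth1
    ONBOOT=yes
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0 -- the bridge itself
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.0.2
    NETMASK=255.255.255.0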

Setup is as follows:

service-a-1 (physical node):
    eth0: 192.168.0.1
    eth1: br0
    br0:  192.168.0.2

service-a-2 (vps):
    eth0: 192.168.0.3
    eth0:0 10.0.0.1, 255.255.192.0
    eth0:1 10.0.1.1, 255.255.192.0, gw 10.0.0.1

service-a-3 (vps):
    eth0: 192.168.0.4
    eth0:0 10.0.64.1, 255.255.192.0
    eth0:1 10.0.65.1, 255.255.192.0, gw 10.0.64.1

I use the 192.168.0.x IP addresses to connect to the VPS, and the 10.0.x.x addresses to practice connecting subnets. My goal with the above design is to establish a secure tunnel between service-a-2 and service-a-3 by way of their gateway IPs, 10.0.0.1 and 10.0.64.1 respectively. All other nodes in each subnet would then use their local gateway, for which a tunnel is already established, so I don't have to keep creating a new tunnel for every node on either subnet.
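In routing terms, the picture I have in mind is something like this (tun0 is a placeholder for whatever tunnel device ends up carrying the traffic):

    # On service-a-2 (subnet-1 gateway): forward, and send subnet-2 over the tunnel
    sysctl -w net.ipv4.ip_forward=1
    ip route add 10.0.64.0/18 dev tun0

    # On every other subnet-1 node: reach subnet-2 via the local gateway
    ip route add 10.0.64.0/18 via 10.0.0.1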

To test connectivity I have been using `ping -I 10.0.1.1 10.0.65.1`, which should emulate communication between node 1 on subnet 1 and node 1 on subnet 2.

I tried to follow the instructions outlined in this tutorial, as it seemed pretty straightforward, but after reading other posts I'm not sure the tunnel is actually encrypted, since the mode is set to 'gre'. Reading up on OpenSSH instead, it seems a new connection is required for every node on the subnet, versus establishing a single connection between the two gateways.

After more searching around I came across an article provided by Linode which looked promising, but in the first few paragraphs it mentioned that OpenSSH is the preferred method (over OpenVPN) to accomplish what I am seeking to do.

So my question is a two-parter:

  1. Is my logic valid for trying to connect subnets with one another? (Establish a tunnel between the gateways, then point each node on the subnet at its gateway.)

  2. What is the preferred method of establishing a tunnel between two gateways to be shared by X number of nodes within their respective subnets? Linux routes, OpenSSH, OpenVPN, or something else?

-- Update --

After some toying around, it seems I need to establish an OpenSSH tunnel (for encryption) between the disparate routers. The tunnel will connect the external IPs of both routers, which I assume, if set up correctly, will allow me to access nodes behind the router on the other end.
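If I understand it correctly, that would look something like the following (the hostname and the 10.99.0.0/30 link addresses are made up; OpenSSH's -w option needs PermitTunnel yes in the remote sshd_config and root on both ends):

    # On the local router: bring up a layer-3 tun tunnel to the remote router
    ssh -f -N -w 0:0 root@remote-router.example.com

    # Address each end of the tunnel, then route the far subnet over it
    ip addr add 10.99.0.1/30 dev tun0               # local end
    # (remote end) ip addr add 10.99.0.2/30 dev tun0
    ip route add 10.0.64.0/18 via 10.99.0.2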

Something else dawned on me. Say I have the following setup:

subnet-1: Office #1, San Diego, CA

subnet-2: Colo #1, Dallas, TX

subnet-3: Colo #1, Tokyo, Japan

subnet-4: Colo #1, Sydney, Australia

Would it make sense to establish tunnels between each pair of subnets, so they act as one virtual LAN? With four subnets, a full mesh would mean 4×3/2 = 6 tunnels. As I mentioned in the original question, I am doing this so iptables can allow any traffic coming from 10.0.0.0/18, versus having to pinhole iptables on every server for every other server that needs access.

Taking an even further step back, does it even make sense to run iptables on EVERY server if it is behind a firewall? Maybe it would be easier just to stop iptables on all servers behind a firewall. I take security seriously, and it seems common sense to run iptables on every node, even behind a firewall. But if someone gains access to one node, then because of the 10.0.0.0/18 rule pinholed on every server, they could theoretically reach the other nodes as if those nodes weren't running iptables at all.

-- Update #2 --

So I have n2n configured in the following manner:

service-a-1 (behind a router, with UDP 55554 pinholed):

  IP config: 
    ifcfg-eth0:  inet addr:10.0.0.1  Bcast:10.0.63.255  Mask:255.255.192.0 HWaddr 00:1B:78:BB:91:5A

  n2n (edge) startup:
    edge -d n2n0 -c comm1 -k eme -u 99 -g 99 -m 00:1B:78:BB:91:5C -a 10.0.0.1 -l supernode1.example.com:55555 -p 55554 -s 255.255.192.0

service-a-3 (linode vps):

  IP config:
    ifcfg-eth0: inet addr:4.2.2.2  Bcast:4.2.127.255  Mask:255.255.255.0 HWaddr F2:3C:91:DF:D4:08

    ifcfg-eth0:0: inet addr:10.0.64.1  Bcast:10.0.127.255  Mask:255.255.192.0 HWaddr F2:3C:91:DF:D4:08

    n2n (supernode) startup:
     supernode -l 55555 -v

    n2n (edge) startup:
      edge -d n2n0 -c comm1 -k eme -u 99 -g 99 -m F2:3C:91:DF:D4:08 -a 10.0.64.1 -l supernode1.example.com:55555 -p 55554 -s 255.255.192.0

With this setup I was fully expecting to be able to ping service-a-3 (10.0.64.1) from service-a-1 (10.0.0.1), but I keep getting "destination net unreachable". iptables on both servers is turned off; service-a-1 sits behind a firewall, but that firewall is configured to allow ALL outbound traffic. Any idea why I can't ping between the two subnets as if it were a flat network?
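For reference, these are the kinds of checks I've been running while debugging (device names per the configs above):

    ip addr show n2n0                  # did the edge device come up with the expected IP?
    ip route | grep n2n0               # is there a route for the 10.0.x.x/18 via the edge device?
    tcpdump -ni eth0 udp port 55555    # is edge<->supernode traffic flowing at all?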

Mike Purcell
  • I'd use a VPN rather than an SSH tunnel for this sort of setup. VPNs will be more complicated to set up, but IMO in the end will provide what is essentially a big LAN, while SSH will allow remote connectivity that is encrypted, but the devices will still have their own separate networks. – Lawrence Aug 29 '13 at 02:58

2 Answers


You can simplify the solution...

If you're looking for a way to link all of these servers (not routers or gateway devices) as though they were on one flat network, I'd suggest looking at the n2n peer-to-peer offering from ntop.

This tool allows you to traverse intermediate devices, which is helpful if you don't have access to the firewalls involved or have complex routing issues. In my case, I use n2n for monitoring client systems from a central location. It's cleaner than site-to-site VPNs, and I can work around overlapping subnets/IP addresses. Think about it...

Edit:

I recommend using the n2n_v2 fork and hand-compiling.

An example configuration of n2n would look like the following:

On your supernode, you need to pick a UDP port that will be allowed through the firewall in front of the supernode system. Let's say UDP port 7655, with the DNS name edge.mdmarra.net:

# supernode -l 7655 -f -v 
# edge -d tun0 -m CE:84:4A:A7:A3:40 -c mdmarra -k key -a 10.254.10.1 -l edge.mdmarra.net:7655

On the client systems, you have plenty of options. You should choose a tunnel device name, a MAC address (maybe), a community name, a key/secret, an IP address, and the address:port of the supernode. I tend to use a more complete command string:

# edge -d tun0 -m CE:84:4A:A7:A3:52 -c mdmarra -k key -a 10.254.10.10 -l edge.mdmarra.net:7655

These can be run in the foreground for testing, but all of the functionality is in the edge command. I will typically wrap this in the Monit framework to make sure the processes stay up.
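For illustration, a Monit stanza for an edge might look like this (the path and match pattern are assumptions, not my exact config):

    # /etc/monit.d/n2n-edge -- restart the edge if the process dies
    check process n2n-edge matching "edge -d tun0"
        start program = "/usr/sbin/edge -d tun0 -m CE:84:4A:A7:A3:52 -c mdmarra -k key -a 10.254.10.10 -l edge.mdmarra.net:7655"
        stop program = "/usr/bin/pkill -f 'edge -d tun0'"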

ewwhite
  • Looks interesting. Couldn't find how long the project has been in development. Also, according to the downloads page, there is no stable version, only a dev version? – Mike Purcell Aug 29 '13 at 03:24
  • I use CentOS/RHEL - It's available via yum - See: http://pkgs.org/download/n2n – ewwhite Aug 29 '13 at 03:29
  • I use CentOS as well. It looks like n2n is not available via the default repo, but it is available in the EPEL repo. I'm still trying to figure out whether n2n is considered stable or not. Do you use it in an enterprise environment? – Mike Purcell Aug 29 '13 at 04:17
  • @MikePurcell Yes, I do...For the reasons listed in my answer. – ewwhite Aug 29 '13 at 04:19
  • Can you post a sample of your existing configuration? I am still trying to get my head around how to setup the supernodes. If I have the above 4 subnets, where should I place the supernodes? – Mike Purcell Aug 29 '13 at 20:56
  • [There really isn't much to it.](http://wiki.cementhorizon.com/display/CH/HOWTO+Install+n2n+supernode+under+CentOS) Download the package and run through the man pages. Make the most stable server or the main location the supernode. – ewwhite Aug 29 '13 at 22:09
  • Not really a strong response, as I have pored over documents related to n2n since you posted your answer. I'm still trying to figure out how the supernodes fit within the overall architecture of the network. Basically, if I have subnet1 and subnet2 and no tunnel is established between the respective routers, adding a supernode to subnet1 will allow subnet2 to access nodes on subnet1? Or is it that as long as a node on subnet1 has access to the supernode, as does a node on subnet2, then both nodes can communicate, even with no actual tunnel between the subnets? – Mike Purcell Aug 29 '13 at 23:30
  • I only use a single supernode; my main monitoring node. That's the only device that requires inbound/outbound access through the firewall (e.g. a public address). As long as the edge nodes can see it, the architecture is available. – ewwhite Aug 29 '13 at 23:38
  • @MikePurcell I'll post a supernode config later if you need... but you'll be disappointed. It's very minimal. The edge node command string looks like [**this**](http://pastebin.com/ZDDEDVjw). – ewwhite Aug 30 '13 at 07:59
  • Thanks for the edge node config example, I noticed you have a `-d edge87` switch, I assume edge87 is the name of the tunnel you created when following the steps outlined via the n2n git readme file? https://github.com/lukablurr/n2n_v2_fork – Mike Purcell Sep 03 '13 at 17:58
  • That is the name of the tunnel connection. – ewwhite Sep 03 '13 at 18:03
  • Right, I figured that, the problem I have is the docs posted in previous comment show the user creating a tunnel `tun0`, but then reference `-d n2n0`, is this a typo, shouldn't it read: `-d tun0`? – Mike Purcell Sep 03 '13 at 21:19
  • You can use the device name of your choice. tun0 is popular, and makes sense for some environments. I used edgeXX, but that's the setup I inherited. – ewwhite Sep 04 '13 at 01:12
  • Added an update to OP. – Mike Purcell Sep 04 '13 at 16:35
  • @MikePurcell Are you sure you have UDP open on the supernode's router? – ewwhite Sep 04 '13 at 20:43
  • The supernode is not behind a router, rather it is running iptables, and I disabled iptables for testing. Supernode is accessible to all nodes. – Mike Purcell Sep 04 '13 at 20:49
  • @MikePurcell I went back and checked versions. The build available via yum is not pretty. I see the issue you're talking about. Try downloading the [**n2n_v2 fork**](https://github.com/lukablurr/n2n_v2_fork/), compiling it and running. I used a line like `supernode -v -l 7655 -f` on the supernode and: `edge -d tun0 -m CE:84:4A:A7:A3:52 -c mdmarra -k Das7PB55J5 -a 10.254.10.10 -l edge.mdmarra.net:7655 -f` on the client. Works well and the pings/connectivity are consistent. – ewwhite Sep 05 '13 at 02:48
  • @MikePurcell Answer updated! – ewwhite Sep 05 '13 at 02:58
  • Is that the same version as available via epel? The version I tried to work with on this project is 2.1.0-1.el6 – Mike Purcell Sep 05 '13 at 03:45
  • I cloned the n2n_v2_fork version, and followed the rpm build instructions given via the INSTALL file, and it appears the version is 2.1.0-1, which is the same version available via epel repo. You mentioned that you setup a supernode and a single edge node, as I mentioned in my latest update I setup supernode and edge node on one server (diff ports: 55555, 55554) and edge node on another subnet. But was unable to get the two to ping each other. All while iptables was off, and router on subnet1 was pinholed to allow incoming udp traffic to 55554. – Mike Purcell Sep 05 '13 at 03:57
  • Sorry, on the supernode, you'll still need to run an edge as well. You don't need to specify a port for the edge running on the supernode. The edge statement running on the supernode will contain the IP address you need. You don't need the aliased IP (eth0:0) you created. – ewwhite Sep 05 '13 at 11:34
  • The eth0:0 alias allows for the server to be on the 10.0.0.0/18 network as the primary eth0 config is assigned a static ip via linode. – Mike Purcell Sep 05 '13 at 17:04
  • You should use a different IP space for the n2n edge connections. It should not overlap with your real interface IP addresses. – ewwhite Sep 06 '13 at 00:17
  • Why? It makes more sense to me to have each edge node register the internal, assigned ip address with the supernode, not have an actual ip address, then a different ip address just for the n2n to work. – Mike Purcell Sep 06 '13 at 00:24
  • For routing purposes. Otherwise, how would the traffic know which way to go? – ewwhite Sep 06 '13 at 00:27
  • In the example you gave you have `-a 10.254.10.1`, the 10.x is a class A non-routable range correct? I assumed that the supernode was able to store the gateway information so it knows how to reach that node. – Mike Purcell Sep 06 '13 at 05:20
  • The private IPs of the servers involved are completely different. I use this to connect to servers on customer networks; many of whom either have overlapping IP ranges or multiple levels of NAT. That's why n2n has its own IP scheme separate from those. – ewwhite Sep 06 '13 at 10:37

You could set up a GRE tunnel and see if it fits your needs. The general idea (very general) is close to that of the VPN solutions, only without all the security overhead. This is based on my assumption that you do not need or want security.

If later you decide to add security to the link, you can do so. It is relatively easy to implement PPTP-over-GRE and even IPsec-over-GRE.

Although GRE is a technology developed by Cisco, it is by no means proprietary. Many Linux distributions have the necessary tools for setting up a GRE tunnel.
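A minimal iproute2 sketch (placeholder public IPs standing in for the two routers, plus a made-up /30 for the tunnel itself):

    # Site A (public IP 198.51.100.1), tunneling to site B (203.0.113.1)
    ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255
    ip link set gre1 up
    ip addr add 10.99.0.1/30 dev gre1
    ip route add 10.0.64.0/18 via 10.99.0.2

    # Site B mirrors this with local/remote swapped and 10.99.0.2/30 on gre1.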

You can check this brief write-up about PPTP-over-GRE as it is implemented in Arch Linux, the distribution I use for most of the servers I set up.

dlyk1988