
I have 7 dedicated servers running at a hosting provider.

Whenever I order a new server I cannot rely on it being set up in the same rack as the others, so I cannot put them on a physical private network. Instead I currently set up SSH tunnels between them with autossh so that, for example, the web servers can communicate with the databases.
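
For reference, each tunnel is an autossh invocation along these lines (the host name and the forwarded PostgreSQL port are just placeholders for illustration):

    # keep a local forward to the database server alive;
    # -M 0 disables autossh's monitor port and relies on ServerAlive probes instead
    autossh -M 0 -f -N \
        -o "ServerAliveInterval 10" \
        -o "ServerAliveCountMax 3" \
        -L 5432:127.0.0.1:5432 tunnel@db1.example.com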

I'm provisioning the servers with Ansible, but as the number of servers grows it's becoming frustrating to deal with all the tunnels and port numbers, and autossh has some issues whenever a network connection has been interrupted. I would much rather have them all on the same network, so I'm thinking about switching to a VPN-based solution like tinc instead. However, I'm unsure whether it would add too much overhead to the network connections. The servers have Gbit network connections and currently use a maximum of about 300 Mb/s at peak. Ping times through the tunnels are around 4 ms.
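
To give an idea of what I mean, this is roughly the tinc 1.0 setup I'm considering; the net name, host names, addresses, and the 10.10.0.0/24 overlay subnet are made up for illustration:

    # /etc/tinc/backend/tinc.conf on a web server
    Name = web1
    ConnectTo = db1

    # /etc/tinc/backend/hosts/db1 (the host's public key, generated with
    # "tincd -n backend -K", is appended below these lines)
    Address = 203.0.113.10
    Subnet = 10.10.0.1/32

    # /etc/tinc/backend/tinc-up
    #!/bin/sh
    ip addr add 10.10.0.2/24 dev $INTERFACE
    ip link set $INTERFACE up

Each node would also have its own hosts/<name> file that gets copied to all the other servers (something Ansible could distribute).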

Is a VPN like tinc a good alternative to the SSH tunnels, or is there a better option I'm overlooking?

PS: These are the servers I use: https://www.hetzner.de/dedicated-rootserver/ax60-ssd and https://www.hetzner.de/dedicated-rootserver/dell/dx291

Niels Kristian
    What are you trying to accomplish? Secure private communications between the servers? Why not use IPsec transport encryption? – MikeyB Oct 30 '17 at 15:35
  • Yes, secure connections between the servers, so e.g. my web apps (running on 3 servers) can communicate with my postgresql primary database (running on another server) – Niels Kristian Oct 30 '17 at 15:38
  • Are you leasing full dedicated hosts or VMs? Most dedicated hosting providers these days offer 2nd private interfaces. If they are VMs, maybe you could ask your host to add a 2nd vnic for the same purpose. Either case, that's where I'd start. – Confusias Oct 30 '17 at 15:38
  • @Confusias Leasing full ones. But if I want to have the servers connected with a second network adaptor to a private gigabit switch, then they need to be in the same rack, and that is not possible with the different kinds of servers I use – Niels Kristian Oct 30 '17 at 15:43
  • The dedicated hosting providers I've worked with and used did not require the servers to be in the same rack for private network access. I highly recommend you look into this further. – Confusias Oct 30 '17 at 15:45
  • @Confusias I don't think Hetzner.de offers anything like that unfortunately :-( – Niels Kristian Oct 30 '17 at 15:47
  • I also looked into having multiple servers in a VLAN at Hetzner. At the time I wanted to easily migrate VMs to different servers. It wasn't possible or easy (I don't quite remember). But you can just use OpenVPN or something using IPsec. If SSH tunnels (over TCP, with an extra layer of flow control) work somewhat OK, a good VPN solution will be better. – Halfgaar Oct 30 '17 at 15:55
  • Wow, that sucks @NielsKristian I'd recommend migrating to a provider that does support fully private interfaces. Doing so would certainly simplify the operation/security of your back-end traffic. In fact, most providers do not charge for private interface bandwidth traffic as it never reaches the public internet. May save you some bandwidth fees as well... – Confusias Oct 30 '17 at 16:00
  • Maybe at https://networkengineering.stackexchange.com/ there would be additional expertise. (But I just don't know for now how to follow good practice when posting on multiple Stack Exchange sites.) – A.B Oct 30 '17 at 17:47
  • @MikeyB What is IPsec and how does it differ from the others? Do you need to set up a tunnel for each port, or does it put the servers in a private network, or how does it work? – Niels Kristian Oct 30 '17 at 18:22

1 Answer


Based on the question criteria and comments, a VPN is your best bet, though you will incur some overhead for the encryption.
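
Regarding the IPsec transport-mode question in the comments: IPsec works at the IP layer between two hosts, so you don't set up a tunnel per port; all traffic between the two addresses gets encrypted. As a rough sketch only, assuming strongSwan with a pre-shared key (the addresses and the secret are placeholders):

    # /etc/ipsec.conf on both servers (legacy ipsec.conf syntax)
    # transport mode encrypts host-to-host traffic directly; no virtual network
    conn backend
        type=transport
        authby=secret
        left=198.51.100.10
        right=198.51.100.20
        auto=start

    # /etc/ipsec.secrets (same pre-shared key on both servers)
    198.51.100.10 198.51.100.20 : PSK "replace-with-a-long-random-secret"

With 7 servers that becomes a full mesh of host-to-host associations to manage, which is where a mesh VPN like tinc tends to be more convenient.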

Personally, I still recommend migrating to a dedicated host that offers a private back-end network.

Confusias