
I have been reading up on WAN optimization for a while now, mostly out of interest in speeding up my own internet connection, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load balances the connections. In the office, we have a single connection into a NetGear router.

Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be built around the idea of having multiple branch offices around the world. What I am looking for, ideally, is as follows:

  • Software install: I am guessing I need to install it in 2 places: one in the office or house, and one in "the cloud".
  • Any connections going to, say, the US (we are in Europe, but our backups currently live in the US, which would be something important to speed up) would be "tunnelled" through the optimizer. When downloading or uploading large files, it would open multiple connections between "the cloud" and the optimizer... This is where a lot of speed could be gained.
  • Finally, items that are not already compressed would be compressed on the cloud side of things, and items that are already on the optimizer would not be sent again. Kind of like rsync or proxy servers...
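The "don't resend what the far side already has" idea from the last bullet can be sketched as chunk-level deduplication, roughly what rsync and WAN optimizers do internally. A minimal illustration in Python, assuming fixed-size chunks and a simple hash cache (real tools use rolling checksums and a proper cache protocol; all names here are hypothetical):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; rsync uses a rolling checksum instead


def chunks(data: bytes, size: int = CHUNK_SIZE):
    """Yield fixed-size slices of the payload."""
    for i in range(0, len(data), size):
        yield data[i:i + size]


def deduplicated_send(data: bytes, remote_cache: set) -> list:
    """Return only the chunks the remote side hasn't already seen."""
    to_send = []
    for chunk in chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_cache:
            remote_cache.add(digest)
            to_send.append(chunk)
    return to_send


cache = set()
payload = b"A" * 8192  # two identical 4 KiB chunks

first = deduplicated_send(payload, cache)   # only one unique chunk goes out
second = deduplicated_send(payload, cache)  # nothing new to send
```

On the second transfer of the same payload nothing crosses the wire at all, which is where the big wins come from for repetitive traffic like backups.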

So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), is it something that needs to be purchased, or is there an open source project that does 90% of what I am asking?

[UPDATE] Just to be 100% clear, these are 2 separate connections and 2 separate systems... I am trialling this at home and may use it in the office also...

TiernanO
  • Sounds like you want to build a Content Delivery Network (CDN) for yourself, when really what you need is some policy routing and possibly QoS. Can you tell us what you're doing, and why, that's caused you to go this route? 2x cable modems (assuming typical speeds) should be more than enough for any household. – gravyface Sep 19 '12 at 12:21
  • CDN sounds like something for delivering content... I want to consume it... And even though my modems are fast (250Mb/s down and 20Mb/s up total), the upload to some servers in the States, as well as downloads from other places, can be slow (backing up to the US is topping out at about 3Mb/s... with 20Mb/s up, I would have expected more). – TiernanO Sep 19 '12 at 12:27

1 Answer


What are you trying to do exactly?

If your internet isn't fit for purpose, change it or upgrade it. It sounds like you've got a myriad of issues.

  1. You haven't mentioned the model of the NetGear router - but it's very unlikely that it can push 250Mb/s.
  2. You are only load balancing the connections at the source of the traffic. This means you won't get an aggregated total of 250Mb/s - you'll be getting 2x 125Mb/s. If you used the two connections as 2x MPLS L2 connections to a single aggregator "in the cloud" (dare I say it), then you could utilise the full 250Mb/s (assuming the 'cloud' machine had connectivity >= 250Mb/s).
  3. You haven't mentioned what you are sending between source and destination. Are you sending single large archives once per day, or are you looking for a real-time performance enhancement? For the former, with enough CPU power, you might see a speed-up by using a very high level of compression at the origin, then decompression at the destination. But there's not much that can do this on-the-fly, as it's far too application specific. Perhaps a VPN tunnel with compression enabled?
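The "compress at the origin, decompress at the destination" step from point 3 is easy to prototype; here is a minimal sketch using Python's standard zlib at its highest compression level (whether this helps at all depends entirely on how compressible the traffic is - already-compressed archives will see no gain):

```python
import zlib

# Repetitive text compresses extremely well; binary archives usually don't.
original = b"backup log line: nothing unusual happened\n" * 1000

compressed = zlib.compress(original, level=9)  # spend CPU at the origin
restored = zlib.decompress(compressed)          # destination side undoes it

ratio = len(compressed) / len(original)  # well under 1.0 for text like this
```

For a real tunnel, the same trade-off applies: the origin burns CPU to shrink the payload, and the gain only shows up when the data isn't already compressed.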

The crux of it is: if you don't have enough capacity, you don't have enough capacity - you can't get speed that isn't there.

Ben Lessani
  • Ok... I am tweaking the question, because I don't think I asked it the correct way... but the NetGear is only connected to a 12.5Mb/s line (don't have the name to hand) and it will be a completely separate connection... it has nothing to do with the 2 cable modems... Your mention of the 2 MPLS L2 connections sounds like what I may be looking at, so I need to look into that... VPN may also do something, but again, I'm looking for something that actually does something with the application data before sending it through the VPN... Don't get me wrong, I do have enough capacity; it's more a latency issue... but thanks! – TiernanO Sep 19 '12 at 16:50
  • MPLS bonding, VPN tunnelling or compression-on-the-fly will only add **more** latency. The only way to lower latency is to make sure you've got 1:1 contention on the line, with optimal routes to your destination subnets (not even an option unless you're running BGP). I.e. just get a better internet connection, designed for whatever purpose you are trying to use it for (e.g. a leased line). – Ben Lessani Sep 19 '12 at 16:54
  • Ok, so how do the likes of CloudOpt do stuff like this? I tried it out, and uploading files to S3 using a CloudOpt box in-house and one on EC2 made things a bit faster... their magic sauce is in the protocol between machines... I still need to look into this MPLS stuff... might be interesting... – TiernanO Sep 19 '12 at 16:58
  • Your capacity and latency are going to be as good as the weakest link. You can't take two lines with a 20ms round trip and expect it to halve. It will still be 20ms - best case scenario, you've just doubled the width of the pipe (i.e. increased capacity). Regardless of where you upload the data in the first instance, this will always be the slowest part... – Ben Lessani Sep 19 '12 at 17:02
  • I agree with you on that... but, and tell me if I missed something here: a connection comes into the "box" in the house (or multiple connections do), they get split or combined depending on the bandwidth available, can be compressed, tweaked, deduplicated, etc., then sent to the "box" in "the cloud", undone there, and moved out to the rest of the internet... For non-compressible stuff there may not be a great improvement, but there may be something... and that's what I am interested in: some sort of improvement at the packet layer, not the app layer. – TiernanO Sep 19 '12 at 17:11
  • That's the traditional world of MPLS bonding. There are service providers that do it for ADSL connections; it is really common. They basically split the data at the source, send it down all available connections, then join it back up on a hosted service. But by nature, the time it takes to split the data, re-join it, and account for delayed/lost `SYN/ACK`s **will** add to latency. It'll increase the pipe size, but it will slow things down en route. If you want an improvement at the packet layer - just get better, faster internet connectivity. *You can't polish a turd* :) – Ben Lessani Sep 19 '12 at 17:23
  • Thanks for that final comment @sonassi... Server Fault is warning me not to add more comments, so I think I will leave it at that, but thanks so far. You have given me great insight. I will leave this open and see how things go, and may accept this as an answer if I get nothing else... Thanks again! – TiernanO Sep 19 '12 at 21:46
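The bandwidth-versus-latency point made in the comments above can be put into numbers with a back-of-the-envelope model (the figures below are purely illustrative, not measured):

```python
def transfer_time(size_mb: float, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Naive model: one round trip of setup, then a fixed-rate transfer (seconds)."""
    return rtt_ms / 1000 + (size_mb * 8) / bandwidth_mbps


# Bonding two 20Mb/s lines doubles throughput but leaves the round trip alone.
one_line = transfer_time(size_mb=100, bandwidth_mbps=20, rtt_ms=150)   # ~40.15 s
two_lines = transfer_time(size_mb=100, bandwidth_mbps=40, rtt_ms=150)  # ~20.15 s

# Tiny transfers are dominated by the 150 ms round trip, so doubling the
# pipe barely changes anything.
small_one = transfer_time(size_mb=0.01, bandwidth_mbps=20, rtt_ms=150)
small_two = transfer_time(size_mb=0.01, bandwidth_mbps=40, rtt_ms=150)
```

Large bulk transfers (like the backups in the question) nearly halve in time, while latency-bound traffic sees almost no benefit - which matches the answer's point that bonding widens the pipe without shortening it.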