5

I have switched providers and have run into some problems with bandwidth limitations. I have more bandwidth than before, but there are performance issues.

The router is connected to a 100 Mbit port, but they limit it to an arbitrary rate (in software, I imagine). It seems that when I go above the limit, the provider starts to drop packets beyond it (this is what they said they do as well). Is it possible the previous provider did something like queuing packets above this limit before dropping them? Is anyone aware of not only what can be done, but what is typical?

Also, is there anything I can do on my Cisco router to help this situation? It would seem I am pretty helpless if the packets are dropped before they reach my interface (the traffic that is high is inbound to my network).

Kyle Brandt

3 Answers

5

Usually it's shaping vs. policing, done on the provider's edge interface facing you. See here or here for more info.

Shaping buffers traffic above the limit, policing just drops it. I'm guessing your old ISP may have been shaping and the new one is policing.

You could try shaping your side of the interface so your router is doing the buffering to avoid getting to the point where your provider is dropping your packets.
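
A minimal sketch of what that could look like with MQC on IOS, assuming a 10 Mbit commit from the provider and FastEthernet0/0 as the provider-facing interface (the policy name, rate, and interface are placeholders; substitute your own):

    ! Shape everything leaving toward the provider to 10 Mbps
    policy-map SHAPE-TO-PROVIDER
     class class-default
      shape average 10000000
    !
    interface FastEthernet0/0
     description Link to provider
     service-policy output SHAPE-TO-PROVIDER

This queues bursts on your router instead of letting them hit the provider's policer, though it only directly controls your outbound traffic.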

James
  • So why put shaping on my outbound interface? Is the idea that the slower response rate to ACKs (slow start) and such makes the other side send the data a little slower? – Kyle Brandt Mar 17 '10 at 01:48
  • @Kyle: No, the idea is that the traffic back is probably proportional to the traffic you're sending out, and by managing that, you'll end up limiting the return traffic as well. The reason for dropping instead of shaping is to (hopefully) reduce the TCP window size and thus manage the traffic levels. It may be worth asking if your provider is willing to enable WRED on their side, as that MAY be beneficial in trying to temper the traffic before it hits the hard limit. – Vatine Mar 17 '10 at 16:41
1

This is a decidedly low-tech solution, but one I've seen implemented in place of true QoS and traffic-shaping capabilities:

If you want to prevent going over your allotted 10 Mbps, you could use a CAT3 cable to connect your NIC to the switch.


I saw this used when a small lab wanted to limit a server from flooding their DS3 connection - they wanted to ensure it never used more than a quarter of the bandwidth, so they used a CAT3 cable. Low-tech, but effective.

warren
  • Haha, I love this solution =D – Antoine Benkemoun Mar 17 '10 at 18:43
  • That's unlikely to actually do anything you'd want it to. Ethernet isn't like a V.90 modem: it negotiates either a 10, 100, or 1000 Mbit link. If the physical media isn't able to properly carry that speed, it won't shift down to the next-lowest link speed; you'll instead just get media errors like crazy and your network performance will be astonishingly bad. – chris Mar 17 '10 at 21:59
  • perhaps so, but I've seen and done it successfully before :-) – warren Mar 18 '10 at 00:34
0

I would not recommend downgrading your cabling in a real server situation, although it is a cute idea. You can use mii-tool to set the link properties of your network card instead. Beware that some network switches don't like being told to negotiate down and get a bit confused. If your traffic is bursty in nature, a provider which supports bursting for short periods might be a better alternative. I'd talk to your hosting provider before messing with any settings like this; they may just offer you a more suitable package!
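
As a rough sketch, forcing a Linux NIC down to 10 Mbit full duplex looks something like this (eth0 is just an assumed interface name; older drivers use mii-tool, newer ones ethtool):

    # Force eth0 to 10 Mbit full duplex (mii-tool, older drivers)
    mii-tool -F 10baseT-FD eth0

    # Equivalent with ethtool on newer drivers
    ethtool -s eth0 speed 10 duplex full autoneg off

As noted above, some switches handle a forced speed/duplex badly, so check the link state on both ends after changing it.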

Not all co-los are created equal. Make sure you research your host's transit providers: how much transit they have and, more importantly, how much of it you can use. Having a gigabit link to a switch is all well and fine in the data centre, but it means nothing if it's not backed up with adequate internet transit.

The Unix Janitor