
I want to limit the incoming (download) speed on a Linux box.

Both the box being configured and the traffic source (an HTTP server) are connected to the same switch; when shaping is not configured, the download speed is 30 MBps.

I use tc as described in http://lartc.org/lartc.html:

########## downlink #############
# slow downloads down to somewhat less than the real speed  to prevent 
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

/sbin/tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

/sbin/tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1

But the effective download speed is much lower than configured. Here are the results of my experiments:

Set rate (KBps): real rate (KBps)

  • 32 KBps: 30 KBps
  • 64 KBps: 50 KBps
  • 128 KBps: 106 KBps
  • 256 KBps: 160 KBps
  • 512 KBps: 210 KBps
  • 1024 KBps: 255 KBps

For small bandwidths shaping works quite well, but at 1024 KBps the effective rate is 75% lower than expected.

Is it possible to effectively limit incoming bandwidth?

andreikop
  • Capitalization Matters: KB (Kilo ***BYTE***) != Kb (Kilo ***BIT***). Which units do you mean to use here? (Your firewall rules are clearly in KiloBIT/sec, your post is quoting speeds in KiloBYTE/sec) – voretaq7 Nov 26 '13 at 16:47
  • Yes, it is kiloBYTES, I converted the values – andreikop Nov 27 '13 at 08:01

3 Answers


> bw is lower than expected

I think you have to increase the burst correspondingly as well.
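For example, a sketch reusing the $DEV and ${DOWNLINK} variables from the question; the 128k burst is illustrative (roughly 125 ms worth of traffic at the 1024 KBps / 8 Mbit setting) and should be tuned together with the rate:

# same ingress policer as in the question, but with the burst scaled
# to the configured rate instead of a fixed 10k
/sbin/tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 128k drop flowid :1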

> Is it possible to effectively limit incoming bandwidth?

I'd say you surely can achieve a similar effect by dropping packets instead of receiving them. For protocols like TCP, which have bandwidth self-tuning mechanisms, it would effectively work. Take a look at http://www.linuximq.net/faq.html

poige
  • Thanks! With a bigger burst (Tc = 125 ms) I get an HTTP download speed of 88% of the configured rate, whether the limit is set on the server or on the client. [Some info](http://networklessons.com/quality-of-service/qos-traffic-shaping-explained/) about burst and Tc – andreikop Nov 27 '13 at 13:42

> Is it possible to effectively limit incoming bandwidth?

NO.

Trying to limit incoming bandwidth is basically trying to limit the flow of a firehose by holding up a board with a hole drilled in it: You will reduce the amount of water that hits you, but you're still being hit by the firehose.

Carrying the firehose analogy further, if you need 100 gallons of water but limit the rate at which it's getting to you (by holding up the board with the hole in it) you're still bearing the brunt of the force of the firehose (traffic coming down your pipe), but not getting most of that water (because only what happens to go through the hole reaches you -- The rest is dropped on the floor by your firewall board).

The effect of blocking all that water is that it takes longer to fill your 100-gallon bucket.
The effect of blocking TCP packets with a firewall is a little worse, because you trigger the remote host's congestion control algorithm, which in an ideal world makes it turn down the pressure on the firehose, sometimes substantially lower than you would like.

Incidentally this is also why a local firewall can't save you from DoS attacks - you still have to deal with all the traffic, even if it's just to make the decision to ignore it. A DoS attack is unlikely to honor congestion control procedures for obvious reasons.

voretaq7
  • It seems like the speed control algorithm works quite well if the burst is not too small. See my comment on the answer by **poige** – andreikop Nov 27 '13 at 13:47

I have heard mixed results on limiting incoming bandwidth, but this should be possible with the ifb device in the kernel. While what @voretaq7 said is one truth, you can "limit" incoming packets if you accept all input packets and redirect (or mirror) them into one of the **I**ntermediate **F**unctional **B**lock (ifb) devices. To those, you can attach any filters normally limited to egress filtering.

This may not sound "helpful", as you still have to accept all traffic into the ifb -- but then you get to decide what traffic comes out of that holding queue "in" to the rest of your system.
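A minimal sketch of that setup (assuming the interface is eth0, the ifb module is available, and an HTB shaper on the ifb side; the ${DOWNLINK} rate from the question and the burst value are illustrative):

# create an ifb device and bring it up
modprobe ifb numifbs=1
ip link set dev ifb0 up

# redirect everything arriving on eth0 to ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
   action mirred egress redirect dev ifb0

# shape on ifb0 with a normal egress qdisc (HTB here)
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate ${DOWNLINK}kbit burst 128k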

This has the benefit of not dropping packets unless they are lower-priority packets. Certainly, if you are being DoS'd, the key problem is likely to be that the total inbound traffic is higher than your line can sustain, so trying to affect that with this method is futile. This method only works on legitimate streams over any desired protocol (TCP, UDP, ICMP, etc.). For example, if I want to prioritize DNS over bulk downloads, I can do that. However, no matter what traffic algorithm you use, if you have 30 Mb/s coming in, then with a fastest normal clock interrupt of 1000 Hz you still have to deal with about 30 Kb of traffic per clock tick, and that's presuming you get called in a timely fashion. That's the main reason why you need a high burst rate: it's hard to handle that much with rate limiting alone.
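To make the DNS-over-bulk example concrete: on the ifb0 setup sketched above, a hypothetical filter could steer DNS into its own guaranteed-rate class (the 1:1 class ID and the 512 kbit rate are made up for the example):

# a small class reserved for DNS, alongside the default bulk class 1:10
tc class add dev ifb0 parent 1: classid 1:1 htb rate 512kbit
tc filter add dev ifb0 parent 1: protocol ip prio 1 u32 \
   match ip protocol 17 0xff match ip dport 53 0xffff flowid 1:1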

It also helps if your network card has multiple I/O queues. Many cards out there have 6-12 queues per direction and can provide some "automatic" classifying into separate queues, based on the (usually more limited) filtering options of the ethernet card.

What can be more helpful, if you can divvy up your traffic into those multiple queues, is that you can set processor affinity for the queues. If you find yourself limited by the CPU handling packets, multiple queues can help spread the incoming traffic across different cores for processing (don't use Hyperthreading -- it will likely cause a performance problem, since the threads aren't operating on shared data but on separate data streams; those are handled best by CPUs with separate L1 and L2 caches (L3 is usually still shared among multiple cores), but you can at least dedicate the L1 and L2 caches to their own "flows").
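A rough sketch of what that looks like in practice (assuming eth0, a driver that exposes multiple channels via ethtool, and an IRQ number read from /proc/interrupts; IRQ 42 and the CPU mask are hypothetical):

# show how many RX/TX channels the driver supports, then enable 4 combined queues
ethtool -l eth0
ethtool -L eth0 combined 4

# find the IRQs assigned to the queues
grep eth0 /proc/interrupts

# pin one queue's IRQ (42 is hypothetical) to CPU 2 (bitmask 0x4)
echo 4 > /proc/irq/42/smp_affinity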

Due to throughput problems I had with single queues and policing, I gave up on ingress control -- and I only had 5Mb incoming at the time (7Mb now), so I haven't tried to see how effective multiqueue and ifb are for ingress shaping. As it is, I generally use application-level shaping and controls -- not ideal, but fairly reliable.

One issue that crops up now and then is that, either due to line problems or ISP congestion, I won't get my max BW, and then my fixed filter presets don't adapt... That's another reason I haven't worked on this issue too hard, as I'm not sure how much work it would take to make the rate limiting dynamic enough to sense such problems, and right now I have too many other higher-priority projects in my queue... (and my I/O rate is far below my ethernet port's)... ;-)

Astara