6

How would I do a QoS setup where a certain low-priority data stream would get up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams / classes must not be limited to X.

The use case is an ISP billing the traffic by calculating the bandwidth average over 5 minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer during interface busy times) but get the data through during idle/low traffic times.

Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.

the-wabbit

3 Answers

1

I got this to work with HFSC. I assume "X" in your example is 100mbit, but of course it could be anything.

The trick here is to create a class tree like so:

+--------------------------------------------------------------+  +---------------------+
|                                                              |  |        1:1          |
|                            root                              |  |---------------------|
|                              +                               |  | Rate: 100mbit       |
|                              |                               |  | Upper Rate: 100mbit |
|                              |                               |  |                     |
|                              |                               |  |                     |
|                              |                               |  |                     |
|                         +----v------+                        |  +---------------------+
|                         |  1:1      |                        |
|                         |           |                        |  +---------------------+
|                         +--+---+----+                        |  |         1:10        |
|                            |   |                             |  |---------------------|
|                            |   |                             |  | Rate: 100mbit       |
|                            |   |                             |  | Upper Rate: 100mbit |
|                            |   |                             |  |                     |
|          +----------+------+   +--------+----------+         |  |                     |
|          |  1:10    |                   |  1:11    |         |  |                     |
|          |          |                   |          |         |  +---------------------+
|          +----------+                   +----------+         |
|                                                              |  +---------------------+
|                                                              |  |         1:11        |
|                                                              |  |---------------------|
|                                                              |  | Rate: 10kbit        |
+--------------------------------------------------------------+  | Upper Rate: 100mbit |
                                                                  |                     |
                                                                  |                     |
                                                                  |                     |
                                                                  +---------------------+

The magic happens because class 1:10 (the default class) is set up to always get a guaranteed bandwidth of 100mbit, whereas the 'slow' class 1:11 is guaranteed only 10kbit, bursting up to 100mbit.

This forces the root class (1:1) to always honour the needs of 1:10 over 1:11.
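
For reference, a rough sketch of the tc commands for the tree above might look like this (untested; the interface name eth0, the 100mbit value for "X" and the example port 12345 are placeholders to adapt to your own setup):

    # hfsc root qdisc; unclassified traffic falls into 1:10
    tc qdisc add dev eth0 root handle 1: hfsc default 10

    # parent class at the link/billing rate
    tc class add dev eth0 parent 1: classid 1:1 hfsc sc rate 100mbit ul rate 100mbit

    # default class: guaranteed the full 100mbit
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc sc rate 100mbit ul rate 100mbit

    # 'slow' bulk class: 10kbit guaranteed, may burst to 100mbit when the link is idle
    tc class add dev eth0 parent 1:1 classid 1:11 hfsc sc rate 10kbit ul rate 100mbit

    # example filter steering TCP traffic to port 12345 into the slow class
    tc filter add dev eth0 parent 1: protocol ip u32 \
        match ip dport 12345 0xffff flowid 1:11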

Things to note:

  • Don't use the iptables CLASSIFY target to put traffic into 1:11; it is really slow at doing classifications. Use traffic control (tc) filters instead, or, if a number of applications go into this class and their ports vary too much to filter on, use a cgroup.
  • Set the default class of the hfsc qdisc to 1:10.
  • You should probably set the 'slow' class rate to at least the TCP maximum segment size of your host. That way a sending application stuck in the slow queue can block for long periods of time without the kernel having to renegotiate window sizes and whatnot.

I tested this by having two competing applications send data as fast as possible to a neighbouring host over two services, one of which was in class 1:11. Both sent 5 seconds' worth of traffic at 100mbit (so 60MB of data streamed). Running classless, both finished in 10 seconds as expected (they share the link, so the time is divided equally).

With this QoS setup, the priority service finished in 5 seconds, whereas the low-priority service finished in 10 (as if the low-priority one were waiting for the high-priority one to finish first), which I think is what you want.

Matthew Ife
0

I'm not sure if this will work, but you could try HTB:

  • For the low-priority stream, set the rate to zero (or almost zero) and the ceil to the actual maximum X. This gives the low-priority stream a guaranteed speed of (almost) zero and a chance to borrow up to X MBit/s from other streams.
  • For the other streams, set the rate to the speed of your network interface.

According to the HTB documentation, this should work. However, I didn't try it myself.
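
Something along these lines, perhaps (an untested sketch; eth0, a 1gbit link speed and 100mbit for X are assumptions):

    # htb root; unclassified traffic goes to 1:10
    tc qdisc add dev eth0 root handle 1: htb default 10

    # parent class so the children can borrow from each other
    tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit

    # normal traffic: guaranteed (roughly) the full interface speed
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1gbit ceil 1gbit

    # low-priority stream: almost no guaranteed rate, may borrow up to X
    # (HTB does not accept a rate of exactly zero)
    tc class add dev eth0 parent 1:1 classid 1:11 htb rate 8kbit ceil 100mbit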

EDIT: This will not limit low-prio traffic while the link has X MBit/s idle bandwidth. But it may be a start...

Black
  • I've already tried this, [kind of](http://pastebin.com/YWRWrrZX). The result was the low-priority stream always getting to the ceiling, no matter the other classes' traffic (as long as it did not exceed the upstream class rate of course). – the-wabbit Apr 03 '12 at 11:20
  • What about setting the upstream class rate to the low-prio limit? (and ceil still at link speed).... just guessing... – Black Apr 03 '12 at 14:35
0

It's awkward, but if you can change the limit on the fly, you could have a daemon averaging over a finer mesh (say, 1-minute intervals, keeping track of the last 5-10). Then you just need a fairly simple control loop that adjusts the traffic limit to keep the 5-minute average a safe amount under your cap. More complicated traffic-prediction schemes are optional.
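
A minimal sketch of such a loop, assuming the bulk traffic sits in an HTB class 1:11 on eth0 and the billing cap is roughly 100 Mbit/s (all of these names and numbers are placeholders):

    #!/bin/sh
    # Naive control loop: once a minute, compare the bytes sent in the last
    # interval against a per-minute budget and widen or narrow the ceil of
    # the bulk class accordingly. A real version would keep a sliding window
    # of the last 5-10 samples as described above.
    DEV=eth0
    BUDGET=$((95 * 1000000 / 8 * 60))   # bytes per minute at ~95 Mbit/s

    prev=$(cat /sys/class/net/$DEV/statistics/tx_bytes)
    while sleep 60; do
        cur=$(cat /sys/class/net/$DEV/statistics/tx_bytes)
        used=$((cur - prev))
        prev=$cur
        if [ "$used" -gt "$BUDGET" ]; then
            # over budget: throttle the bulk class hard
            tc class change dev $DEV parent 1:1 classid 1:11 htb rate 8kbit ceil 1mbit
        else
            # under budget: let the bulk class burst again
            tc class change dev $DEV parent 1:1 classid 1:11 htb rate 8kbit ceil 100mbit
        fi
    done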

zebediah49