
I'm trying to emulate a slow network link with the tc command. I use netem to emulate delay and packet loss, and htb to emulate narrow bandwidth, but I see that netem has a limit option. What does this option do? Will it affect the final bandwidth?

I googled it and found something in http://manpages.ubuntu.com/manpages/raring/man8/tc-netem.8.html

which says:

limits the effect of selected options to the indicated number of next packets.

But I still cannot understand what it does.

Matthew Green
Daniel Dai
    I agree this is puzzling. I find the manpage rather clear, but netem does not do what I would expect: `limit 10 loss 100%` should drop the next 10 packets only. But in fact all packets are dropped. For me `limit` has no effect. – Johannes Overmann Mar 12 '14 at 10:14

3 Answers


I don't know exactly what netem is doing internally, but I've found that if you don't set "limit" to a high enough value, netem doesn't work correctly: it discards packets at higher speeds and possibly has other problems, essentially failing to emulate a real network accurately.

From the mailing list mentioned by CarlH, Stephen Hemminger said:

The limit value is in packets at least when using the default qdisc inside netem (tfifo). You can also use pfifo and configure it for packet limit, or bfifo same only bytes. The value 1000 is low, you want about 50% more than the max packet rate * delay, unless you are trying to emulate a router with a small queue.

So for a 1 Gbps link with 100 ms delay: 1 Gbps / 8 / 1500 bytes MTU * 100 ms * 1.5 = 12500 packets.

Command:

sudo tc qdisc add dev eth1 root netem limit 12500 delay 100ms loss 1%

I've been using limit 100000, which seems to work fine, but it seems a lower value may be fine.
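Stephen Hemminger's rule of thumb above is easy to script. A minimal sketch (the variable names and values are illustrative, not part of tc):

```shell
# Rule of thumb from the quote above: limit ≈ packet rate * delay * 1.5
BPS=1000000000   # link bandwidth in bits/s (1 Gbps)
MTU=1500         # packet size in bytes
DELAY_MS=100     # netem delay in milliseconds

# Keep the arithmetic in integers: multiply first, divide last
# (bits * ms * 1.5, then convert to bytes, seconds, and packets)
LIMIT=$(( BPS * DELAY_MS * 15 / 10000 / 8 / MTU ))
echo "$LIMIT"    # 12500
```

Plugging in a smaller average packet size (e.g. 1000 bytes) raises the suggested limit accordingly, since the same bandwidth then carries more packets per second.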

Peter Tseng
    Well, that math works only if you have maximal-size packets. With the network average being around 1000 B, you would need 18,750; and with minimal-size packets, 234,375 (keeping the 50% headroom). The idea is very simple: if you add a delay (with the delay argument), those packets have to "wait" somewhere, and the number we computed is the maximal number of packets that could be waiting at any time. You need to prepare enough "slots". That's why, if the limit is not high enough, you'll start dropping packets (independently of the loss parameter). – MappaM Apr 03 '20 at 14:16

From https://lists.linuxfoundation.org/pipermail/netem/2007-March/001091.html

The "limit" parameter refers to the number of buffers allocated in the netem module.

The limit must be adjusted to support the number of frames delayed (e.g. by 500 ms) at a given data rate.

Yours sincerely,

Laurent MARIE

CarlH
  • Any idea if that means to size it like TCP buffers, i.e. buffer size = 2 * delay * bandwidth? I've also heard that it should be bandwidth / MTU. – Peter Tseng Jul 09 '16 at 00:56
  • Sorry, I have to vote this down. The followup that @PeterTseng quotes above is from the netem author and I assume it is the correct one, which makes this answer wrong. – K Erlandsson Feb 11 '19 at 19:17

The updated documentation says:

limit packets
maximum number of packets the qdisc may hold queued at a time.
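That is, limit is simply the netem queue depth in packets. tc echoes the configured value back when you list the qdisc, which is a quick way to check what is installed. A sketch (device name and values are illustrative):

```shell
# Install netem with an explicit queue limit
sudo tc qdisc add dev eth1 root netem limit 12500 delay 100ms loss 1%

# List qdiscs on the device; the output includes the configured limit
tc qdisc show dev eth1

# Clean up when done
sudo tc qdisc del dev eth1 root
```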

Cauchy Schwarz