
I am a newbie network engineer.

I am trying to understand the Linux command tc.

I built a simple network in Mininet: two hosts, H1 and H2, connected by a switch S1.

Then I made H1 send UDP packets to H2 through switch S1, using iperf2.

#H1
iperf -s -p 1212 -f m -i 1

#H2
iperf -c 10.0.0.1 -p 1212 -t 10000 -f m -b 70M -u

To limit the link bandwidth, I wrote the simple bash script below.

#!/bin/bash

# s1-eth1 is the outgoing port from H1 to H2
# its original bandwidth is 100Mbit/s

sudo tc qdisc del dev s1-eth1 root
sudo tc qdisc add dev s1-eth1 root handle 1:0 htb default 12
sudo tc class add dev s1-eth1 parent 1:0 classid 1:1 htb rate 50Mbit
sudo tc filter add dev s1-eth1 protocol ip parent 1:0 prio 1 u32 match ip dport 1212 0xffff flowid 1:1


I expected the rx rate at S1 to become 50Mbit/s, but it didn't.

It showed about 40Mbit/s.

Whenever I changed the settings of this experiment, the measured rate came out smaller than the value I had set with tc.

Why does this happen? I looked over the Linux kernel's tc code, but I couldn't understand it.

Could you give me a little hint?


1 Answer


Hints and troubleshooting:

  1. Understand the difference between queuing, shaping, and policing.
  2. Understand the difference between the ingress and egress directions.
  3. Check the classifier (tc -p filter show dev <iface>).
  4. Check the classifier statistics (tc -s -s -d f ls dev <iface>) - the first troubleshooting step.
  5. Check the queue discipline statistics (tc -s -s -d qdisc list dev <iface> and tc -s -s -d c ls dev <iface>) - the second troubleshooting step.
  6. Use estimators to monitor the actual rate from the qdisc's point of view. An estimator must be specified when the qdisc is attached. Some schedulers can create default estimators when the kernel module has been loaded with the corresponding option (see modinfo sch_htb).
  7. You've specified 1:12 as the default class, but you never define it.
  8. Read the saga about HTB and its example configurations.
  9. QoS isn't a simple thing.
  10. You can also capture the traffic and analyze it with Wireshark. Watching how the TCP window size changes lets you trace the shaper's work, but for UDP this approach isn't very suitable.
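
To illustrate hints 6 and 7, here is a sketch of an HTB setup that defines both the filtered class and the default class 1:12. It is not a definitive configuration: the interface name, port, and 50Mbit rate are carried over from the question, while the 100Mbit rate for the default class and the htb_rate_est remark are assumptions.

```shell
#!/bin/bash
# Sketch only: assumes the question's Mininet topology (interface s1-eth1,
# iperf traffic to UDP port 1212). Run as root.
IFACE=s1-eth1

# Optional (hint 6): reload sch_htb with default per-class rate estimators,
# so `tc -s class show` reports an estimated rate for each class.
# modprobe -r sch_htb && modprobe sch_htb htb_rate_est=1

# Start from a clean slate; ignore the error if no qdisc is attached yet.
tc qdisc del dev "$IFACE" root 2>/dev/null

# Root HTB qdisc; unclassified traffic falls through to class 1:12.
tc qdisc add dev "$IFACE" root handle 1: htb default 12

# Class for the iperf flow, shaped to 50Mbit.
tc class add dev "$IFACE" parent 1: classid 1:1 htb rate 50Mbit

# The default class (hint 7) that the original script never defined.
# 100Mbit is an assumption matching the link's nominal bandwidth.
tc class add dev "$IFACE" parent 1: classid 1:12 htb rate 100Mbit

# Steer packets with destination port 1212 into class 1:1.
tc filter add dev "$IFACE" protocol ip parent 1: prio 1 \
    u32 match ip dport 1212 0xffff flowid 1:1

# Troubleshooting (hints 3-5): filter stats, then class and qdisc stats.
tc -s -s -d filter show dev "$IFACE"
tc -s -s -d class show dev "$IFACE"
tc -s -s -d qdisc show dev "$IFACE"
```

Note also that iperf reports UDP payload throughput, while HTB shapes whole IP packets including headers, so the iperf-reported rate will always sit somewhat below the configured 50Mbit; that accounts for part (though not all) of the gap.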
Anton Danilov
  • Thank you so much! I'll follow your advice right now! I appreciate it :) – nimdrak Jul 29 '19 at 23:17
  • Thanks to you, I tried lots of approaches and finally got a small result: by using tbf rather than htb, I got the result I wanted. But I don't know why that happens yet, so I'll keep working on it. Thanks again! – nimdrak Jul 30 '19 at 12:54
  • I'll extend the answer with the example tomorrow. – Anton Danilov Jul 30 '19 at 12:58
  • I also recognized my misconception about qdiscs and classes by reading the saga about HTB, e.g. http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm and so on. I really thank you. I think I can solve my problem soon. – nimdrak Jul 31 '19 at 02:00
  • http://luxik.cdi.cz/~devik/qos/htb/old/htbmeas1.htm – nimdrak Jul 31 '19 at 02:11
  • I think I finally solved my problem. It came from the tc settings, specifically the MSS. After monitoring the flows using sch_htb and Wireshark, I found a difference between tc htb's available MSS size and iperf3's MSS size. After setting the MSS in tc, I got the result I wanted. I really appreciate it; I was just short of the ability to solve the problem myself. Thank you so much! – nimdrak Jul 31 '19 at 14:47
  • Congratulations! You've gained priceless experience :) You're welcome. – Anton Danilov Jul 31 '19 at 15:23
  • I'll keep in mind the methodology you taught me. Thank you so much! :) – nimdrak Aug 01 '19 at 00:29