I want to create a simple script that lets me limit the outgoing speed of an interface to somewhere between 56 kbit/s (modem speed) and 1 Mbit/s. I found that something along the lines of the following does the job:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
But now I would like the only input to my script to be the rate. What would be a good way to calculate good latency and burst values if all I know is the rate?
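To make that concrete, this is roughly the kind of wrapper I have in mind; the interface name eth1 is still hard-coded, and burst and latency are just the placeholder values from the command above:

#!/bin/sh
# limit.sh -- cap outgoing speed on eth1, rate in kbit/s given as the only argument
RATE_KBIT="$1"                              # e.g. ./limit.sh 220

tc qdisc del dev eth1 root 2>/dev/null      # clear any previous setting so the script can be re-run
tc qdisc add dev eth1 root tbf rate "${RATE_KBIT}kbit" latency 50ms burst 1540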
The tbf man page says that the minimum burst is the rate divided by the kernel's HZ. That makes sense, but it does not help me find a formula for a sensible burst value given the rate. Presumably I want my burst to be a bit bigger than that bare minimum?
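Continuing the sketch above, the burst calculation I am imagining would look something like this. The HZ value has to match the kernel's CONFIG_HZ (commonly 100, 250 or 1000, as far as I know), and the 2x headroom factor and the 1540-byte floor are just guesses on my part:

HZ=250                                      # must match the kernel's CONFIG_HZ
RATE_BYTES=$(( RATE_KBIT * 1000 / 8 ))      # kbit/s -> bytes/s
MIN_BURST=$(( RATE_BYTES / HZ ))            # man page minimum: rate divided by HZ
BURST=$(( MIN_BURST * 2 ))                  # "a bit bigger than the bare minimum" -- factor is arbitrary
[ "$BURST" -lt 1540 ] && BURST=1540         # never below one full Ethernet frame
tc qdisc add dev eth1 root tbf rate "${RATE_KBIT}kbit" latency 50ms burst "$BURST"

For the 220 kbit example with HZ=250 this gives a minimum of only 110 bytes, which is why the floor at one full Ethernet frame seems necessary.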
And how would I calculate a good latency value? Should the latency change with the speed at all?