
I'm running a Linux server accessible to a wide range of users, and (due to policies imposed by my upstream Internet provider) I need to cap the total amount of data each user can transfer at some given limit. The Linux box is the gateway to my provider. Is there a way to do this?

I already have a working iptables setup in place on the box, if that helps, and I have some experience configuring things like HTB. The problem with the setups I've built in the past is that they limit users to a particular bitrate (e.g. 20kbps) rather than to a total transfer amount over a longer period (e.g. 100MB/day).
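For reference, a past setup of mine looked roughly like this (the interface and address are just placeholders, not my real config); it caps rate, not volume:

    # cap one host to 20kbit/s with HTB; unclassified traffic passes unshaped
    tc qdisc add dev eth0 root handle 1: htb
    tc class add dev eth0 parent 1: classid 1:10 htb rate 20kbit
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dst 10.0.0.5/32 flowid 1:10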

Tim

2 Answers


@WerkkreW got me on the right track. The solution I'll be going with is to use squid and its delay_pools feature.

The basic concept is to set up a per-host (class 3) delay pool in squid.conf, set each host's bucket maximum to the amount of data I want to allow per user per day, and set the restore ("fill") rate to that maximum divided by the number of seconds in a day, so each bucket refills completely over one day.
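Something like the following squid.conf fragment, for a hypothetical 100MB/day per-host cap on a 10.0.0.0/24 LAN (the acl name, subnet, and numbers are illustrative, not my actual values):

    # illustrative: 100MB/day per host on the 10.0.0.0/24 LAN
    acl lan src 10.0.0.0/24

    delay_pools 1
    delay_class 1 3                  # class 3 = aggregate / per-network / per-host buckets
    delay_access 1 allow lan
    delay_access 1 deny all
    delay_initial_bucket_level 100   # start each bucket full, i.e. a full day's quota

    # delay_parameters <pool> <aggregate> <network> <individual>
    # -1/-1 disables a bucket; otherwise restore-rate/maximum, in bytes
    # 100MB = 104857600 bytes; 104857600 / 86400 s/day ~= 1213 bytes/s restore rate
    delay_parameters 1 -1/-1 -1/-1 1213/104857600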

Finally, using iptables I'll transparently redirect port-80 requests from my LAN to squid rather than DNATing them straight out, so that users on the internal network are subject to the transfer limits.
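The redirect itself should be something like this (eth1 as the LAN-facing interface and squid's default port 3128 are assumptions; squid's http_port line also needs its transparent/intercept option, depending on version):

    # intercept web traffic arriving from the LAN and hand it to the local squid
    iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128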

Thanks again, WerkkreW, for pointing me in the right direction.

Tim

I believe you could do what you want with iptables and, more appropriately, squid (delay pools?), but it might be complicated to configure and manage. It is definitely possible, though, to limit transfer on a per-user basis with squid. I have never done it myself, so I can't really offer advice on the specifics of setting it up.

There are some other tools you might look at, but most of them do a lot of other filtering and are very full-featured; you might consider them a bit bloated if all you want to do is limit transfer.

WerkkreW