I have heard mixed results on limiting incoming bandwidth, but it should be possible with the ifb device in the kernel. While what @voretaq7 said is one truth, you can "limit" incoming packets if you accept all input packets and redirect (or mirror) them into one of the Intermediate Functional Block (ifb) devices. To those you can attach any of the qdiscs and filters that are normally limited to egress shaping.
This may not sound "helpful", as you still have to accept all traffic into the ifb -- but then you get to decide what traffic comes out of that holding queue "in" to the rest of your system.
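As a rough sketch of what that redirect looks like with tc -- the interface names and the "match everything" filter are placeholders, adjust for your own setup:

    # load the ifb module and bring up one ifb device
    modprobe ifb numifbs=1
    ip link set dev ifb0 up

    # attach the special ingress qdisc to the real interface
    tc qdisc add dev eth0 handle ffff: ingress

    # redirect everything arriving on eth0 into ifb0, where normal
    # "egress" qdiscs and filters can then be applied
    tc filter add dev eth0 parent ffff: protocol ip u32 \
        match u32 0 0 \
        action mirred egress redirect dev ifb0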
This has the benefit of not dropping packets unless they are lower-priority packets. Certainly, if you are being DoS'd, the key problem is likely to be that the total inbound traffic is higher than your line can sustain, so trying to address that with this method is futile. This method only works on legitimate streams over any desired protocol (TCP, UDP, ICMP, etc.). I.e. if I want to prioritize DNS over bulk downloads, I can do that. However, no matter what traffic algorithm you use, if you have a 30Mb/s line and a fastest normal clock interrupt of 1000Hz, you still have to deal with 30Kb of traffic per clock tick -- and that's presuming you get called in a timely fashion. That's the main reason you need a high burst rate: it's hard to handle that much with rate limiting alone.
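Purely as an illustration -- the rates, class IDs, and the DNS-vs-bulk split below are made-up examples sized for a ~30Mb/s line, not a config I actually run -- the shaping then goes on the ifb device like any egress setup, with burst set to at least a few ticks' worth of traffic:

    # HTB on the ifb device; unclassified traffic lands in class 1:20
    tc qdisc add dev ifb0 root handle 1: htb default 20

    # parent class just under the nominal 30Mbit line rate;
    # burst 15k is roughly four 1000Hz ticks' worth (~3.75kB per tick at 30Mbit)
    tc class add dev ifb0 parent 1: classid 1:1 htb rate 28mbit burst 15k

    # high-priority class for DNS, lower-priority class for bulk traffic
    tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 5mbit  ceil 28mbit prio 0 burst 15k
    tc class add dev ifb0 parent 1:1 classid 1:20 htb rate 23mbit ceil 28mbit prio 1 burst 15k

    # classify DNS (port 53, either direction) into the high-priority class
    tc filter add dev ifb0 parent 1: protocol ip u32 match ip sport 53 0xffff flowid 1:10
    tc filter add dev ifb0 parent 1: protocol ip u32 match ip dport 53 0xffff flowid 1:10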
It would also be helpful if your network card has multiple I/O queues. Many cards out there have 6-12 queues per direction that can provide some "automatic" classification into separate queues, based on the (usually more limited) filtering options available on the ethernet card itself.
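As a hedged example of what that card-side classification can look like -- the queue counts and the ntuple filter below depend entirely on your NIC and driver, and many cards won't support all of it:

    # show how many queues ("channels") the card supports and has enabled
    ethtool -l eth0
    # enable, say, 4 combined rx/tx queues if the hardware allows it
    ethtool -L eth0 combined 4

    # turn on the NIC's receive flow steering filters, if supported
    ethtool -K eth0 ntuple on
    # e.g. steer incoming DNS replies (UDP source port 53) to rx queue 2
    ethtool -N eth0 flow-type udp4 src-port 53 action 2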
What can be more helpful, if you can divvy up your traffic into those multiple queues, is that you can set processor affinity for the queues. If you find yourself CPU-limited handling packets, the multiple queues can help spread the incoming traffic across different cores for processing. (Don't use Hyperthreading siblings for this -- it will likely hurt performance, since the threads aren't operating on shared data but on separate data streams. Those are handled best by cores with separate L1 & L2 caches -- L3 is usually still shared among multiple cores -- so each "flow" at least gets its own L1 & L2 cache.)
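A minimal sketch of pinning those queues to cores -- the per-queue IRQ names (e.g. "eth0-rx-2") vary by driver, the IRQ number 45 here is invented, and irqbalance may need to be stopped so it doesn't rewrite the masks behind your back:

    # find the IRQs belonging to the card's per-queue interrupts
    grep eth0 /proc/interrupts

    # pin rx queue 2's IRQ (say it's IRQ 45) to CPU core 2 (hex bitmask 0x4)
    echo 4 > /proc/irq/45/smp_affinity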
Due to throughput problems I had with single queues and policing, I gave up on ingress control -- and I only had a 5Mb incoming line at the time (7Mb now), so I haven't tried to see how effective multi-queue and ifb are for ingress shaping. As it is, I generally use application-level shaping & controls -- not ideal, but fairly reliable.
One issue that crops up now and then is that, due either to line problems or ISP congestion, I won't get my max bandwidth, and then my fixed filter presets don't adapt... That's another reason I haven't worked on this too hard: I'm not sure how much work it would take to make the rate limiting dynamic enough to sense such problems, and right now I have too many other higher-priority projects in my queue... (and my I/O rate is far below my ethernet port's)... ;-)