We have a web service that we want to protect against abusive clients making large numbers of requests in order to effectively harvest all of the data out of it.
We have some level of protection with tokens that are signed and exchanged, but a determined attacker could capture these tokens and replay them on requests to our web service.
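For context, the tokens are roughly along these lines (a simplified sketch, not our exact scheme); the point is that a valid signature alone does nothing to stop someone replaying a token they have already captured:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative only

def sign(payload: str) -> str:
    """Append an HMAC signature to the payload."""
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify(token: str) -> bool:
    """Check the signature; note this accepts the same token any number of times."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```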
So I'm thinking the only real protection is at the server level. One thought was to enforce a request threshold within a specific time interval, then block for a duration that grows if further requests are made during the blackout period, and after repeated attempts, blacklist completely (see the sketch below).
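Here is a rough sketch of that idea, assuming an in-memory store and a per-client key; all the names and thresholds (THRESHOLD, BLOCK_BASE, etc.) are made up for illustration:

```python
import time
from collections import defaultdict, deque

THRESHOLD = 100        # max requests allowed per window
WINDOW = 60            # window length in seconds
BLOCK_BASE = 300       # first blackout period in seconds
BLACKLIST_AFTER = 5    # strikes during blackouts before a permanent block

requests = defaultdict(deque)   # client key -> recent request timestamps
blocked_until = {}              # client key -> end of current blackout
strikes = defaultdict(int)      # client key -> violations during blackouts
blacklist = set()

def allow(client_key: str) -> bool:
    now = time.time()
    if client_key in blacklist:
        return False
    if now < blocked_until.get(client_key, 0):
        # Request during a blackout: grow the block and count a strike.
        strikes[client_key] += 1
        blocked_until[client_key] = now + BLOCK_BASE * (2 ** strikes[client_key])
        if strikes[client_key] >= BLACKLIST_AFTER:
            blacklist.add(client_key)
        return False
    # Sliding window: drop timestamps older than the window, then count.
    q = requests[client_key]
    q.append(now)
    while q and q[0] < now - WINDOW:
        q.popleft()
    if len(q) > THRESHOLD:
        blocked_until[client_key] = now + BLOCK_BASE
        return False
    return True
```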
However, I hate the idea of rolling our own custom solution, and of keying on IPs at all, since one bad user behind a proxy gets everyone else behind that proxy blocked.
What are the best practices for protecting a web service?
Update: To clarify, this is a general question about protecting a web service against mass harvesting of data.