On the contrary, there's a very easy way around it.
As has been said, most whois authorities will throttle (or even block) your traffic if they deem that you're making too many requests in a 24-hour period. Instead, you might want to consider logging in to the FTP site of any of the whois providers worldwide, downloading the various bits of the database, and then writing (or finding) your own script to process them.
I currently do that with one of my own servers, which connects using the following shell script (once every 24 hours):
#!/bin/bash
# Remove yesterday's copies so wget doesn't save duplicates with .1 suffixes
rm -f delegated-afrinic-latest
rm -f delegated-lacnic-latest
rm -f delegated-arin-latest
rm -f delegated-apnic-latest
rm -f delegated-ripencc-latest
rm -f ripe.db.inetnum
rm -f apnic.db.inetnum
rm -f ripe.db.inetnum.gz
rm -f apnic.db.inetnum.gz
# Fetch the per-registry delegation statistics, plus RIPE's split inetnum dump
wget ftp://ftp.afrinic.net/pub/stats/afrinic/delegated-afrinic-latest
wget ftp://ftp.lacnic.net/pub/stats/lacnic/delegated-lacnic-latest
wget ftp://ftp.arin.net/pub/stats/arin/delegated-arin-latest
wget ftp://ftp.apnic.net/pub/stats/apnic/delegated-apnic-latest
wget ftp://ftp.ripe.net/ripe/stats/delegated-ripencc-latest
wget ftp://ftp.ripe.net/ripe/dbase/split/ripe.db.inetnum.gz
# APNIC's split inetnum dump comes from a plain anonymous FTP session
ftp -n -v ftp.apnic.net <<END
user anonymous anonymous@anonymous.org
binary
passive
get /apnic/whois-data/APNIC/split/apnic.db.inetnum.gz apnic.db.inetnum.gz
bye
END
# Decompress the two database dumps
gunzip ripe.db.inetnum.gz
gunzip apnic.db.inetnum.gz
I then have a custom-written program that parses the files into a custom database structure, which my servers then query.
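To give a rough idea of what such a parser does (this is a hedged sketch, not my production code), the delegated-*-latest files are pipe-delimited lines of the form `registry|cc|type|start|value|date|status`, so even a short awk script can answer "which country is this IP delegated to?". The `ip2int`, `lookup_country`, and `sample-delegated` names below are my own inventions for illustration:

```shell
#!/bin/bash
# Sketch: look up the country code for an IPv4 address directly from
# delegated-*-latest files, assuming the documented RIR stats format:
#   apnic|CN|ipv4|1.0.1.0|256|20110414|allocated

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# lookup_country <ip> <delegated-file>...
lookup_country() {
  local ip_int
  ip_int=$(ip2int "$1")
  shift
  awk -F'|' -v ip="$ip_int" '
    # skip comment and summary lines, keep only ipv4 records
    $1 !~ /^#/ && $2 != "*" && $3 == "ipv4" {
      split($4, o, ".")
      start = o[1]*16777216 + o[2]*65536 + o[3]*256 + o[4]
      # field 5 is the number of addresses in the block
      if (ip >= start && ip < start + $5) { print $2; exit }
    }' "$@"
}

# Demonstrate against a one-line sample record; a real run would pass
# the delegated-*-latest files downloaded by the script above.
printf 'apnic|CN|ipv4|1.0.1.0|256|20110414|allocated\n' > sample-delegated
lookup_country 1.0.1.1 sample-delegated   # prints: CN
```

Of course, for any real traffic you'd load the parsed ranges into a proper database rather than scanning flat files per query.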
Since all the servers mirror each other's data, you should be able to get a full data set from one server. If not, it wouldn't take much to modify the above shell script to download the data from the other servers; they all respond to 'ftp.????' and share the same universal folder structure.
I can't help you with the parser, however, as it contains proprietary code, but the file format (especially if you get the split files) is identical to what you see in typical whois output, so it's very easy to work with.
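To illustrate how easy that format is to work with: the split dumps are just blank-line-separated objects of `attribute: value` pairs, so awk's paragraph mode can pull out the range and country from each record. This is a minimal sketch with made-up names (`extract_ranges`, `sample.inetnum`), not the proprietary parser:

```shell
#!/bin/bash
# Sketch: extract the address range and first country attribute from each
# object in a whois-style split dump (e.g. ripe.db.inetnum).
extract_ranges() {
  awk -v RS= -F'\n' '
    {
      range = ""; cc = ""
      for (i = 1; i <= NF; i++) {
        if ($i ~ /^inetnum:/)                  { sub(/^inetnum:[ \t]*/, "", $i); range = $i }
        else if (cc == "" && $i ~ /^country:/) { sub(/^country:[ \t]*/, "", $i); cc = $i }
      }
      if (range != "") print range "|" cc
    }' "$@"
}

# Sample record in the same shape as typical whois output:
printf 'inetnum:      192.0.2.0 - 192.0.2.255\nnetname:      TEST-NET\ncountry:      NL\n\n' > sample.inetnum
extract_ranges sample.inetnum   # prints: 192.0.2.0 - 192.0.2.255|NL
```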
There is a parser on Google Code (that's where I got the download script) called 'ip-country', I think; it's designed to let you build your own whois database. The one I've built is slightly more complicated, as it's combined with other data too (hence why my parser is proprietary).
By downloading and processing your own data like that, you get around any limits imposed by the providers, and the upshot is that querying your own data store is most likely far faster than firing off a request to the whois servers every time someone enters an IP address.