First and foremost, I wouldn't worry about getting into distributed crawling and storage, because, as the name suggests, it requires a decent number of machines before you see good results. Unless you have a farm of computers, you won't really benefit from it. You can build a crawler that fetches around 300 pages per second and run it on a single computer with a 150 Mbps connection.
The next thing on the list is to determine where your bottleneck is.
Benchmark Your System
Try to eliminate MS SQL:
- Load a list of, say, 1000 URLs that you want to crawl.
- Benchmark how fast you can crawl them.
If 1000 URLs doesn't give you a large enough crawl, then get 10,000 or 100,000 URLs (or, if you're feeling brave, the Alexa top 1 million). In any case, try to establish a baseline with as many variables excluded as possible.
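Here's a minimal sketch of such a baseline, assuming a plain-text file of URLs (`urls.txt` is just my placeholder name) and HttpClient for the fetches. It deliberately crawls sequentially so the number you get is a clean starting point before any tuning:

```csharp
// Minimal crawl benchmark: fetch a fixed URL list and report pages/second.
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class CrawlBenchmark
{
    static async Task Main()
    {
        // "urls.txt" is a placeholder: one URL per line, e.g. your 1000-URL test list.
        var urls = File.ReadAllLines("urls.txt")
                       .Where(u => !string.IsNullOrWhiteSpace(u))
                       .ToArray();

        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) })
        {
            var watch = Stopwatch.StartNew();
            int ok = 0, failed = 0;

            foreach (var url in urls)
            {
                try
                {
                    // Download the full body so the timing reflects the whole transfer.
                    await client.GetStringAsync(url);
                    ok++;
                }
                catch (Exception)
                {
                    failed++;
                }
            }

            watch.Stop();
            Console.WriteLine($"{ok} ok, {failed} failed in {watch.Elapsed.TotalSeconds:F1}s " +
                              $"= {ok / watch.Elapsed.TotalSeconds:F1} pages/sec");
        }
    }
}
```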
Identify the Bottleneck
After you have your baseline for crawl speed, try to determine what's causing the slowdown. You will also need to start using multithreading, because you're I/O bound: there's a lot of spare time between page fetches that you can spend extracting links and doing other work, such as talking to the database.
How many pages per second are you getting now? You should try to get more than 10 pages per second.
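To make the I/O-bound point concrete, here's a hedged sketch of bounded-concurrency fetching with HttpClient and SemaphoreSlim (not raw async sockets); the cap of 50 in-flight requests is an arbitrary starting value, not a tuned recommendation. While one request waits on the network, the others keep downloading, which is where the spare time goes:

```csharp
// Fetch many pages concurrently; SemaphoreSlim caps how many requests are in flight at once.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class ConcurrentFetcher
{
    static readonly HttpClient Client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

    static async Task<IReadOnlyList<string>> FetchAllAsync(IEnumerable<string> urls, int maxConcurrency = 50)
    {
        var gate = new SemaphoreSlim(maxConcurrency);

        var tasks = urls.Select(async url =>
        {
            await gate.WaitAsync();
            try { return await Client.GetStringAsync(url); }
            catch (Exception) { return null; }   // ignore failures in this sketch
            finally { gate.Release(); }
        }).ToList();

        return await Task.WhenAll(tasks);
    }
}
```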
Improve Speed
Obviously, the next step is to tweak your crawler as much as possible:
- Try to speed up your crawler so it hits the hard limits, such as your bandwidth.
- I would recommend using asynchronous sockets, since they're MUCH faster than blocking sockets, WebRequest/HttpWebRequest, etc.
- Use a faster HTML parsing library: start with HtmlAgilityPack and, if you're feeling brave, try the Majestic12 HTML parser (there's a parsing sketch after this list).
- Use an embedded database rather than an SQL database, and take advantage of key/value storage: hash the URL for the key and store the HTML and other relevant data as the value (see the sketch below).
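Tying the last two bullets together, here's a hedged sketch: link extraction with HtmlAgilityPack and a SHA-256 hash of the URL as the storage key. The ConcurrentDictionary is only a stand-in for whichever embedded key/value store you end up picking:

```csharp
// Extract links with HtmlAgilityPack and store the page under a hashed-URL key.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using HtmlAgilityPack;

class PageStore
{
    // Stand-in for an embedded key/value store.
    static readonly ConcurrentDictionary<string, string> Store =
        new ConcurrentDictionary<string, string>();

    // Hash the URL so every key has a fixed, index-friendly size.
    static string KeyFor(string url)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(url));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }

    static List<string> SavePageAndExtractLinks(string url, string html)
    {
        Store[KeyFor(url)] = html;

        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        var links = new List<string>();
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors != null)
        {
            foreach (var a in anchors)
            {
                // Resolve relative links against the page's own URL.
                if (Uri.TryCreate(new Uri(url), a.GetAttributeValue("href", ""), out var abs))
                    links.Add(abs.AbsoluteUri);
            }
        }
        return links;
    }
}
```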
Go Pro!
If you've mastered all of the above, then I would suggest you try to go pro! It's important that you have a good selection algorithm that mimics PageRank in order to balance freshness and coverage: OPIC (a.k.a. Adaptive On-Line Page Importance Computation) is pretty much the latest and greatest in that respect. If you have the above tools, then you should be able to implement OPIC and run a fairly fast crawler.
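If it helps to see the shape of it, below is a rough sketch of OPIC's cash/history bookkeeping as described in the Abiteboul–Preda–Cobéna paper: every page starts with some cash; when a page is crawled, its cash is credited to its history and split evenly among its outlinks, and the page currently holding the most cash is a reasonable next candidate. Treat this as an illustration of the idea, not a faithful implementation (the real algorithm also uses a virtual page for dead ends and time windows for the adaptive variant):

```csharp
// Rough sketch of OPIC-style cash/history bookkeeping (illustrative only).
using System;
using System.Collections.Generic;
using System.Linq;

class OpicScheduler
{
    readonly Dictionary<string, double> _cash = new Dictionary<string, double>();
    readonly Dictionary<string, double> _history = new Dictionary<string, double>();

    public void AddPage(string url, double initialCash = 1.0)
    {
        if (!_cash.ContainsKey(url)) _cash[url] = initialCash;
        if (!_history.ContainsKey(url)) _history[url] = 0.0;
    }

    // Greedy policy: crawl the page holding the most cash next.
    public string NextUrl() => _cash.OrderByDescending(kv => kv.Value).First().Key;

    // After crawling 'url', credit its history and distribute its cash to its outlinks.
    public void OnCrawled(string url, IReadOnlyList<string> outlinks)
    {
        double cash = _cash[url];
        _history[url] += cash;
        _cash[url] = 0.0;

        if (outlinks.Count == 0) return;   // a real OPIC routes this cash to a "virtual page"
        double share = cash / outlinks.Count;
        foreach (var link in outlinks)
        {
            AddPage(link, 0.0);
            _cash[link] += share;
        }
    }

    // Estimated importance: the page's share of all accumulated cash + history.
    public double Importance(string url)
    {
        double total = _history.Values.Sum() + _cash.Values.Sum();
        return total > 0 ? (_history[url] + _cash[url]) / total : 0.0;
    }
}
```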
If you're flexible on the programming language and don't want to stray too far from C#, then you can try the Java-based, enterprise-level crawlers such as Nutch. Nutch integrates with Hadoop and all kinds of other highly scalable solutions.