I am creating a tool for web developers that will 'scan' the HTML on their site.
This requires my ASP.NET application to download a page from their site.
To protect from abuse, I want to make sure I obey robots.txt, among other methods.
Does HttpWebRequest do this already? Or is there an open source implementation I can use to validate a robots.txt file given a user-agent string?
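To illustrate the kind of check I'm after: Python's standard library ships a robots.txt parser (`urllib.robotparser`), and this is roughly the behavior I'd want an equivalent .NET library to provide (the user-agent string and URLs here are just examples):

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt content the tool would have downloaded from the target site
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check whether a given user-agent may fetch a given URL
print(rp.can_fetch("MyScanner/1.0", "http://example.com/index.html"))         # True
print(rp.can_fetch("MyScanner/1.0", "http://example.com/private/page.html"))  # False
```

So essentially: given the raw robots.txt text and my crawler's user-agent string, tell me whether a specific URL is allowed.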