I would like to scrape a web site. Its robots.txt file contains the following, but I'm not exactly sure what it is they don't want me to do:
User-agent: *
Disallow: /click
There is no /click subdirectory. Or do they not want me to access anything that would normally require clicking (like submitting data via a form)? They sure aren't making it easy in any case: the main page's form GETs to a page that sets a cookie, which is then read by a third page.
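For what it's worth, a `Disallow` line is a URL-path prefix, not a directory name, so it does not need to correspond to any real subdirectory. A quick sketch of how that rule is interpreted, using Python's standard `urllib.robotparser` (the example.com URLs are placeholders):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Feed the rules directly; normally you'd call rp.set_url(...) and rp.read()
rp.parse(["User-agent: *", "Disallow: /click"])

# "Disallow: /click" blocks any URL whose path starts with "/click"
print(rp.can_fetch("*", "https://example.com/click"))         # False
print(rp.can_fetch("*", "https://example.com/click?ad=123"))  # False
print(rp.can_fetch("*", "https://example.com/clicktrack/x"))  # False (prefix match)
print(rp.can_fetch("*", "https://example.com/contact"))       # True
```

So the rule most likely targets a click-tracking or redirect endpoint (URLs like `/click?...`), not form submissions in general.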