I'm trying to crawl user-defined websites, but I'm not able to crawl sites where robots.txt prevents crawling. That's fine, but I want to get some response so I can show the user that "the site you have entered doesn't allow crawling due to robots.txt".
There are three other types of prevention for which I got the code and am handling accordingly, but this one exception (i.e. prevention by robots.txt) is the only one I cannot handle. So please let me know if there is any way to handle this case and show an appropriate error message.
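To make the question concrete, here is a rough sketch of the kind of errback-based handling I'd like to end up with. I'm assuming (perhaps wrongly) that when `ROBOTSTXT_OBEY` is enabled, the built-in RobotsTxtMiddleware drops forbidden requests with `IgnoreRequest` and that this exception reaches the request's errback; the spider name, URL, and attribute names below are just placeholders:

```python
import scrapy
from scrapy.exceptions import IgnoreRequest


class UserSiteSpider(scrapy.Spider):
    name = "user_site"  # placeholder spider name
    custom_settings = {"ROBOTSTXT_OBEY": True}

    def __init__(self, user_url="https://example.com", *args, **kwargs):
        # 'user_url' stands in for the user-entered website
        super().__init__(*args, **kwargs)
        self.user_url = user_url

    def start_requests(self):
        yield scrapy.Request(self.user_url, callback=self.parse,
                             errback=self.handle_error)

    def parse(self, response):
        self.logger.info("Crawled %s", response.url)
        # ... normal extraction logic ...

    def handle_error(self, failure):
        # Assumption: RobotsTxtMiddleware drops forbidden requests by raising
        # IgnoreRequest, and Scrapy routes that failure to this errback.
        if failure.check(IgnoreRequest):
            request = getattr(failure, "request", None)
            url = request.url if request else self.user_url
            self.logger.error(
                "The site you have entered (%s) doesn't allow crawling "
                "due to robots.txt", url)
        else:
            # Other download errors (DNS, timeout, HTTP errors, ...)
            self.logger.error(repr(failure))
```

The problem is that I'm not sure this is the right way to distinguish a robots.txt block from other dropped requests, since the `IgnoreRequest` raised in this version doesn't seem to carry a specific message.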
I'm using Python 3.5.2 and Scrapy 1.5.