I have a project in which I have to crawl a great number of different sites. All of these sites can be crawled with the same spider, since I don't need to extract items from their pages. The approach I thought of is to parametrize the domain in the spider and pass the domain and start URLs as arguments to the scrapy crawl command, so that I can avoid creating a separate spider for every site (the list of sites will grow over time); a rough sketch of what I have in mind is at the end of this question. The idea is to deploy the project to a server running scrapyd, so several questions come to me:
- Is this the best approach I can take?
- If so, are there any concurrency problems if I schedule the same spider several times with different arguments?
- If this is not the best approach and it is better to create one spider per site... I will have to update the project frequently. Does a project update affect running spiders?
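
To make the idea concrete, here is a minimal sketch of the kind of parametrized spider I have in mind (the spider name `generic` and the argument names `domain` and `start` are just placeholders):

```python
import scrapy


class GenericSpider(scrapy.Spider):
    # Placeholder name; one spider reused for every site.
    name = "generic"

    def __init__(self, domain=None, start=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Arguments arrive as strings (from "scrapy crawl -a ..." or from
        # scrapyd's schedule.json), so multiple start URLs are comma-separated.
        self.allowed_domains = [domain] if domain else []
        self.start_urls = start.split(",") if start else []

    def parse(self, response):
        # No items are extracted; just follow links within the allowed domain.
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
```

Locally I would run it with something like `scrapy crawl generic -a domain=example.com -a start=http://example.com/`, and on scrapyd I would schedule it by passing the same arguments to schedule.json:

```
curl http://localhost:6800/schedule.json -d project=myproject -d spider=generic -d domain=example.com -d start=http://example.com/
```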