I am using Scrapyd to run multiple spiders as jobs across the same domain. I assumed Scrapy kept a shared hashtable of visited URLs that it used to coordinate between spiders so they would not crawl the same pages. When I schedule multiple instances of the same spider with
curl http://localhost:6800/schedule.json -d project=projectname -d spider=spidername
each job crawls the same URLs and scrapes duplicate data. Has anyone dealt with a similar problem before?
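
For reference, this is roughly how I kick off the jobs from a script instead of curl (the project and spider names are placeholders, and the loop is only there to show several jobs being scheduled in quick succession):

    import requests

    SCRAPYD_URL = "http://localhost:6800/schedule.json"

    # Schedule three instances of the same spider for the same project.
    # Each call creates a separate Scrapyd job, and each job appears to
    # keep its own record of visited URLs rather than sharing one.
    for _ in range(3):
        response = requests.post(
            SCRAPYD_URL,
            data={"project": "projectname", "spider": "spidername"},
        )
        print(response.json())  # e.g. {"status": "ok", "jobid": "..."}

Every job runs to completion on its own, so the same pages end up being fetched once per job.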