I have created a script to run two spiders in the same process and generate the desired output. If the first spider completes crawling before the second, I get the desired output. However, if the second spider finishes before the first, the script terminates without waiting for the first spider to complete crawling. What could be the reason, and what modification should I make to my code?
from scrapy.utils.project import get_project_settings
from scrapy.crawler import CrawlerProcess

setting = get_project_settings()
process = CrawlerProcess(setting)

for spider_name in process.spider_loader.list():
    setting['FEED_FORMAT'] = 'json'
    setting['LOG_LEVEL'] = 'INFO'
    setting['FEED_URI'] = spider_name + '.json'
    setting['LOG_FILE'] = spider_name + '.log'
    process = CrawlerProcess(setting)
    print("Running spider %s" % spider_name)
    process.crawl(spider_name)

process.start()
print("Completed")