Before you mark this as a duplicate, please read ahead. I have researched this extensively and haven't found a solution (only a different question with the same title).
In my project, I want to take a URL from the user and scrape it with a Scrapy spider/crawler. I first tried to do this by putting the scraper code directly in my views.py, roughly like the sketch below, but starting and stopping twisted.internet.reactor inside the view was causing problems, since the reactor can't be restarted on the next request.
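This is a simplified sketch of what I had (UrlSpider is my own spider, which takes the URL as an argument; names are just placeholders):

    # views.py (simplified)
    from django.http import JsonResponse
    from scrapy.crawler import CrawlerProcess

    from myapp.spiders import UrlSpider  # my spider, takes a start URL

    def scrape_view(request):
        url = request.GET.get("url")

        process = CrawlerProcess(settings={"LOG_ENABLED": False})
        process.crawl(UrlSpider, start_url=url)
        process.start()  # runs the Twisted reactor and blocks until the crawl finishes

        # This works for the first request, but the second one raises
        # twisted.internet.error.ReactorNotRestartable
        return JsonResponse({"status": "done"})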
Another method was to hand the crawl off to a scheduling service such as Scrapyd. But the point is that the next operations in that particular view must happen only after the crawler has finished scraping, whereas Scrapyd, as far as I can tell, will only schedule the crawler (see the sketch below).
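For example, scheduling through Scrapyd's schedule.json endpoint returns immediately with a job id, well before the crawl is actually done (project and spider names here are placeholders):

    import requests

    resp = requests.post(
        "http://localhost:6800/schedule.json",
        data={"project": "myproject", "spider": "url_spider", "url": url},
    )
    print(resp.json())  # e.g. {"status": "ok", "jobid": "26d1b1a6d6f111e0..."}
    # The view would continue here even though the crawl is still running.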
Please correct me if my assumption about Scrapyd is wrong, or point me to an API I could use to track the progress of the crawl. If neither exists, please suggest how I can achieve this. TIA.