First of all - many thanks in advance. I really appreciate it all.
So I need to crawl a small set of URLs fairly frequently (around once an hour) and extract specific data from them.
A PHP site will then be updated with the crawled data; I cannot change that part.
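To make the scope concrete, here's a minimal sketch of the kind of job I mean, in Python with requests and BeautifulSoup; the target URLs, the selector, and the PHP endpoint are all placeholders, and I'm assuming the PHP side can accept a JSON POST:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URLs - the real targets are a handful of pages like these
TARGETS = ["https://example.com/page1", "https://example.com/page2"]
# Hypothetical PHP script that receives the crawled data
PHP_ENDPOINT = "https://example.com/update.php"

def crawl_once():
    results = []
    for url in TARGETS:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # The h1 selector is just a stand-in for the specific data I need
        results.append({"url": url,
                        "title": soup.select_one("h1").get_text(strip=True)})
    # Hand the data off to the PHP site
    requests.post(PHP_ENDPOINT, json=results, timeout=10)

if __name__ == "__main__":
    crawl_once()  # would be scheduled hourly (cron, Task Scheduler, etc.)
```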
I've read this answer: Best solution to host a crawler?, which seems fine and has the upside of letting you use cloud services if you want to scale up.
I'm also aware of Scrapy.
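For reference, the whole job in Scrapy would amount to roughly a spider like this (again, the URL and selector are placeholders):

```python
import scrapy

class HourlySpider(scrapy.Spider):
    name = "hourly"
    # Placeholder for the small set of pages I actually need
    start_urls = ["https://example.com/page1"]

    def parse(self, response):
        # Extract the specific field; h1 is a stand-in selector
        yield {"url": response.url,
               "title": response.css("h1::text").get()}
```

So the crawling code itself is small; it's the hosting, scheduling, and hand-off to the PHP site that I'd rather not build from scratch.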
Now, I wonder if there is a more complete solution to this that doesn't require me to set all these things up myself. The problem I'm trying to solve doesn't seem particularly unusual, and I'd like to save time by using a more complete solution or a set of instructions. I would contact the person in this thread (https://stackoverflow.com/users/2335675/marcus-lind) for more specific help, but I can't.
I'm currently running Windows on my personal machine, and getting Scrapy to work there is not the easiest thing, with installation problems and the like.
Do you think there is any way of avoiding this setup work? If there isn't, how do I decide whether to go with Python/Scrapy or Ruby on Rails, for example?