I've deployed a spider to Scrapyd. In development the spider wrote a file to disk; deployed, no file is produced, and I believe it's a permission problem. I ultimately want to FTP the data out, so:

1. Could I skip writing a file at all? Is there a way to take a list of dict objects and FTP them out without creating a file first?
2. Is a temp file a viable option, and would permissions be easier on one?
3. Or can I give the scrapyd daemon more privileges?

Thanks,
Jim
import json

dict_list = []
for units in tableRows:
    mOutput = {
        'model': units.xpath(".//td[1]/text()").get().replace('\xa0', ''),
        'modelName': units.xpath(".//td[2]/text()").get(),
        'oct': units.xpath(".//td[3]/text()").get(),
        'nov': units.xpath(".//td[4]/text()").get(),
        'dec': units.xpath(".//td[5]/text()").get(),
        'jan': units.xpath(".//td[6]/text()").get(),
        'feb': units.xpath(".//td[7]/text()").get(),
        'mar': units.xpath(".//td[8]/text()").get(),
        'apr': units.xpath(".//td[9]/text()").get(),
        'may': units.xpath(".//td[10]/text()").get(),
        'jun': units.xpath(".//td[11]/text()").get(),
        'july': units.xpath(".//td[12]/text()").get(),
        'total': units.xpath(".//td[14]/text()").get(),
        'shipped': units.xpath(".//td[14]/text()").get()  # note: same column as 'total'
    }
    dict_list.append(mOutput)

objToFTP = json.dumps(dict_list)
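For option 1, this is roughly what I'm imagining: serialize the list to JSON in memory and hand `ftplib.storbinary` an `io.BytesIO` buffer instead of a real file, so nothing ever touches disk. A minimal sketch (host, credentials, and the remote filename here are placeholders, not real values):

```python
import io
import json
from ftplib import FTP

def ftp_json(dict_list, host, user, password, remote_name):
    """Upload a list of dicts as a JSON file over FTP, with no file on disk."""
    # storbinary reads from any file-like object, so an in-memory
    # bytes buffer stands in for the file we would otherwise write.
    payload = io.BytesIO(json.dumps(dict_list).encode("utf-8"))
    with FTP(host) as ftp:                       # placeholder host
        ftp.login(user, password)                # placeholder credentials
        ftp.storbinary(f"STOR {remote_name}", payload)
```

This would be called from the spider's `closed` callback (or a pipeline) with `dict_list`, avoiding the permission problem entirely since nothing is written locally.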
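For option 2, I assume a temp file would dodge the permission issue because `tempfile` picks a directory the daemon can normally write to (e.g. `/tmp`) rather than the scrapyd project directory. A sketch of what I have in mind (`dump_to_tempfile` is just an illustrative helper name):

```python
import json
import tempfile

def dump_to_tempfile(dict_list):
    """Write the scraped rows to a JSON temp file; return its path."""
    # delete=False keeps the file around after the handle closes,
    # so it can be uploaded and then removed explicitly.
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".json", delete=False
    ) as tmp:
        json.dump(dict_list, tmp)
        return tmp.name
```

The returned path could then be FTP'd with an ordinary `open(path, "rb")` and deleted afterwards.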