
I am running the code below, using multiprocessing to push ticker_list through a request-and-parsing routine faster. The code works, but it is very slow, and I am not sure this is the correct usage of multiprocessing. If there is a more efficient way to do this, please let me know.

import csv
import multiprocessing as mp

import GetFinData  # the asker's own data-fetching module

ticker_list = []

with open('/home/a73nk-xce/Documents/Python/SharkFin/SP500_Fin/SP_StockChar/ticker.csv', 'r', newline='') as csvfile:
    spamreader = csv.reader(csvfile)
    for rows in spamreader:
        pass

    for eachTicker in rows:
        ticker_list.append(eachTicker)

def final_function(tickers):
    try:
        GetFinData.CashData(tickers)
    except Exception:
        pass

if __name__ == '__main__':
    jobs = []
    p = mp.Process(target=final_function, args=(ticker_list,))
    jobs.append(p)
    p.start()
    p.join()      
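For comparison: a single `Process` whose target handles the entire list runs everything in one child, so nothing actually executes in parallel. Distributing work across tickers is usually done with `multiprocessing.Pool`. A minimal sketch, where `fetch_one` is a hypothetical per-ticker worker standing in for a call like `GetFinData.CashData(ticker)` on a single symbol:

```python
import multiprocessing as mp

def fetch_one(ticker):
    # hypothetical per-ticker worker; stands in for a call like
    # GetFinData.CashData(ticker) on a single symbol
    return ticker.lower()

def run(tickers):
    # Pool.map slices the list across worker processes and returns
    # results in input order; contrast with a single Process that
    # handles the whole list in one child.
    with mp.Pool(processes=4) as pool:
        return pool.map(fetch_one, tickers)

if __name__ == '__main__':
    results = run(['AAPL', 'MSFT', 'GOOG'])
```

The `if __name__ == '__main__'` guard matters here: worker processes re-import the main module, and the guard keeps them from spawning pools of their own.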
dano
Aran Freel

1 Answer


If your ticker file is large (judging by the path name, it may be), iterating with the csv reader just to reach the last row is a waste of time. `csv.reader` does not support seek, so the only way to get to the last line is:

    for row in spamreader:
        pass

After this loop, `row` contains the last row in the file.

As shown here: Most efficient way to search the last x lines of a file in python, it is possible to retrieve only the last lines of the file and then parse them with the csv module afterward.

This will save some computation time.
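One way to sketch that approach: open the file in binary mode, seek backwards from the end in blocks until at least one full line is buffered, then hand only that line to `csv.reader`. The helper name `last_row` is mine, and it assumes the final row has no embedded newlines inside quoted fields:

```python
import csv
import os

def last_row(path):
    """Parse only the final CSV row by seeking from the end of the file.

    Reads fixed-size blocks backwards instead of iterating every row,
    so runtime does not grow with file size. Assumes the last row
    contains no embedded newlines inside quoted fields.
    """
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        block = 1024
        data = b''
        while size > 0:
            step = min(block, size)
            size -= step
            f.seek(size)
            data = f.read(step) + data
            # stop once the buffer holds at least one complete line
            # (ignoring the file's trailing newline, if any)
            if data.strip(b'\n').count(b'\n') >= 1:
                break
        last_line = data.strip(b'\n').split(b'\n')[-1].decode('utf-8')
    return next(csv.reader([last_line]))
```

For a file of any size this touches at most a few KB from the tail, whereas `for row in spamreader: pass` still reads and parses every row.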

Henrik