I am downloading 1700 company stock-market datasets from Yahoo Finance and saving each one to a CSV file. I did it with a while loop that runs 1700 times, and it took more than 2 hours. Can I use parallel programming in Python to save time and do the downloads in parallel?
import datetime

import pandas as pd
import pandas_datareader as web

# example date range; 'start' and 'end' were undefined in my original script
start = datetime.datetime(2020, 1, 1)
end = datetime.datetime.today()

count = 0
while count < 1700:
    # ticker is hardcoded here; in the real script it changes each iteration
    df = web.DataReader("TCS.NS", 'yahoo', start, end)
    df.to_csv('csv_file.csv')
    df = pd.read_csv('csv_file.csv')
    # ... further processing ...
    count += 1
I am also performing various operations on the data inside the while loop and storing the results in a MySQL database. Please help me with this problem.
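Since the downloads are I/O-bound, one common approach is a thread pool that fans the symbols out to a worker function. Below is a minimal sketch of that pattern using `concurrent.futures.ThreadPoolExecutor`; the `symbols` list and `download_one` worker are hypothetical names, and the `pandas_datareader` call is left commented out so the sketch runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# import pandas_datareader as web  # uncomment in the real script

def download_one(symbol):
    """Fetch one symbol's history and save it to its own CSV file."""
    # df = web.DataReader(symbol, 'yahoo', start, end)
    # df.to_csv(f"{symbol}.csv")  # one file per symbol avoids overwriting
    return symbol  # returned so the caller can track progress

symbols = ["TCS.NS", "INFY.NS", "RELIANCE.NS"]  # replace with your 1700 symbols

results = []
# Threads (not processes) suit I/O-bound work like HTTP downloads;
# max_workers controls how many requests run at once.
with ThreadPoolExecutor(max_workers=16) as pool:
    futures = [pool.submit(download_one, s) for s in symbols]
    for fut in as_completed(futures):  # yields each future as it finishes
        results.append(fut.result())

print(sorted(results))  # → ['INFY.NS', 'RELIANCE.NS', 'TCS.NS']
```

Writing each symbol to its own file (rather than reusing `csv_file.csv`) is important once downloads run concurrently, since parallel workers would otherwise clobber the same file.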