I read this question, but it does not seem to address my situation (maybe I misread it): Splitting very large csv files into smaller files
I have a large CSV file (1.0 GB) with lots of individual rows (over 1 million) and 8 columns. Two columns represent date and time, while the others hold stock-related information (price, etc.). I would like to save individual files separated by the date and time attributes. So, if there are 100 different date-time combinations, I would extract the rows for each combination and save them as a separate CSV file under a subfolder like C:/Date/Time/filename.csv.
I am currently using pandas to filter the data for each date-time combination, producing a dataset with the required rows for that combination, and then saving each file inside a loop over the list of date-time combinations. This is taking a very long time.
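To make the current approach concrete, here is a minimal sketch of what I'm doing. The column names ("Date", "Time", "Price") and the output folder are placeholders; the real file has 8 columns and writes under C:/.

```python
import os
import pandas as pd

# Tiny stand-in for the 1 GB file; column names are assumptions,
# the real data has 8 columns including date and time.
df = pd.DataFrame({
    "Date": ["2020-01-01", "2020-01-01", "2020-01-02"],
    "Time": ["09:30", "09:31", "09:30"],
    "Price": [100.0, 100.5, 101.0],
})

base = "out"  # stands in for C:/ in the real script

# Current approach: one full-DataFrame boolean filter per date-time
# combination. With N rows and K combinations this scans all N rows
# K times, which is why it is slow on a million-row file.
combos = df[["Date", "Time"]].drop_duplicates()
for date, time in combos.itertuples(index=False):
    subset = df[(df["Date"] == date) & (df["Time"] == time)]
    out_dir = os.path.join(base, date, time.replace(":", ""))
    os.makedirs(out_dir, exist_ok=True)
    subset.to_csv(os.path.join(out_dir, "filename.csv"), index=False)
```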
Is there a better way to accomplish this?
(I will look into multithreading as well, but I don't believe it will significantly improve the speed.)
Thanks!
Adding a sample of input data: