
I have a large CSV file that I cannot load into a DataFrame using read_csv() due to memory issues. However, the first column of the CSV contains a {0,1} flag, and I only need to load the rows flagged '1', which will easily be small enough to fit in a DataFrame. Is there any way to load the data conditionally, or to filter the CSV prior to loading it (similar to grep)?

ibav

1 Answer


You can use pd.read_csv's comment parameter and set it to '0':

import pandas as pd
from io import StringIO

txt = """col1,col2
1,a
0,b
1,c
0,d"""

pd.read_csv(StringIO(txt), comment='0')

   col1 col2
0     1    a
1     1    c
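One caveat with this trick, which the OP also runs into below: comment='0' truncates each line at the first '0' pandas finds anywhere in the line, not only when the line starts with '0', so data values that contain a '0' get cut off. A small sketch of the pitfall:

```python
import pandas as pd
from io import StringIO

# Caveat: comment='0' cuts each line at the FIRST '0' found anywhere,
# not only when the line begins with '0'.
txt = """col1,col2
1,a0c
0,b"""

df = pd.read_csv(StringIO(txt), comment='0')
# The row "1,a0c" is truncated to "1,a", so col2 becomes 'a', not 'a0c'
```

This is safe for the question as posed (a {0,1} flag in the first column and no other zeros), but it won't generalize to arbitrary data.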

You can also use the chunksize parameter to turn pd.read_csv into an iterator, filter each chunk with query, and combine the results with pd.concat.
NOTE: As the OP pointed out, a chunk size of 1 isn't realistic; it is used here for demonstration purposes only. Increase it to suit your needs.

pd.concat([df.query('col1 == 1') for df in pd.read_csv(StringIO(txt), chunksize=1)])
# query is convenient but slower; the equivalent boolean indexing
# below performs better:
# pd.concat([df[df.col1 == 1] for df in pd.read_csv(StringIO(txt), chunksize=1)])

   col1 col2
0     1    a
2     1    c
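For the grep-style pre-filtering the question asks about, you can also filter lines with the standard library before handing them to pandas, so only the matching rows are ever held in memory. A minimal sketch, using a StringIO to stand in for the large file on disk (with a real file you would use open() on its path instead):

```python
import pandas as pd
from io import StringIO

# Stand-in for the large file; replace with open('big.csv') for real data
# (the path 'big.csv' is hypothetical).
f = StringIO("""col1,col2
1,a
0,b
1,c
0,d""")

header = next(f)                                      # keep the header line
kept = [line for line in f if line.startswith('1,')]  # grep-like prefix filter
df = pd.read_csv(StringIO(header + ''.join(kept)))
```

Because the filter runs line by line, the full file is never loaded; only the header and the rows starting with '1,' reach read_csv.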
piRSquared
    @juanpa.arrivillaga what are you talking about, just as it was intended to be used :-) – piRSquared Apr 17 '17 at 23:35
  • I tried to generalize the problem, but in reality I'm filtering on a string in the middle of the line. The comment parameter answered my question exactly, but is limited in that it only works on a single character at the beginning of the line. The concat solution works perfectly, although I increased the chunksize - 1 was too slow. I also needed to add low_memory=False to get around some dtype issues: mtms = pd.concat([df.query('Pool=="FX"') for df in pd.read_csv(mtms, chunksize=1000, low_memory=False)]) – ibav Apr 18 '17 at 14:47
  • @ibav Yes! Please increase the chunk size. `1` was for demonstration purposes. – piRSquared Apr 18 '17 at 14:56