
I have a single, large file. It has 40,955,924 lines and is >13 GB. I need to be able to split this file into individual files based on a single field. If I were using a pd.DataFrame, I would use this:

for k, v in df.groupby(['id']):
    v.to_csv(k, sep='\t', header=True, index=False)

However, I get the error `KeyError: 'Column not found: 0'`. There is a solution to this specific error in "Iterate over GroupBy object in dask", but it requires using pandas to store a copy of the dataframe, which I cannot do. Any help splitting this file up would be greatly appreciated.

Alex

1 Answer


You want to use apply() for this:

def do_to_csv(df):
    # df.name holds the group key, so it doubles as the output filename
    df.to_csv(df.name, sep='\t', header=True, index=False)
    return df

df.groupby(['id']).apply(do_to_csv, meta=df._meta).size.compute()

Note: the group key is stored in the dataframe's name attribute. We return the dataframe and supply a meta; this is not strictly necessary, but you will need to compute on something, and it's convenient to know exactly what that something is. The final output will be the number of rows written.
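
For completeness, here is a minimal end-to-end sketch of the same approach starting from the file itself. The path, blocksize, separator, and output naming below are illustrative assumptions, not details from the question:

import dask.dataframe as dd

# Read the large file lazily in blocks rather than loading it all at once
ddf = dd.read_csv('file.txt', sep='\t', blocksize='256MB')

def do_to_csv(group):
    # group.name is the group key, reused here to build the output filename
    group.to_csv(str(group.name) + '.tsv', sep='\t', header=True, index=False)
    return group

# Supplying meta and computing .size triggers the per-group writes
ddf.groupby(['id']).apply(do_to_csv, meta=ddf._meta).size.compute()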

mdurant
  • Thank you for the solution, in the end I used `awk '{ out=$4".txt"; print >> out; close(out) }' file.txt` running overnight. – Alex Aug 09 '18 at 16:16