9

Consider the following dataframe:

import pandas as pd
import numpy as np
import pyarrow.parquet as pq
import pyarrow as pa

idx = pd.date_range('2017-01-01 12:00:00.000', '2017-03-01 12:00:00.000', freq = 'T')

dataframe = pd.DataFrame({'numeric_col' : np.random.rand(len(idx)),
                          'string_col' : pd.util.testing.rands_array(8,len(idx))},
                           index = idx)

dataframe
Out[30]: 
                     numeric_col string_col
2017-01-01 12:00:00       0.4069   wWw62tq6
2017-01-01 12:01:00       0.2050   SleB4f6K
2017-01-01 12:02:00       0.5180   cXBvEXdh
2017-01-01 12:03:00       0.3069   r9kYsJQC
2017-01-01 12:04:00       0.3571   F2JjUGgO
2017-01-01 12:05:00       0.3170   8FPC4Pgz
2017-01-01 12:06:00       0.9454   ybeNnZGV
2017-01-01 12:07:00       0.3353   zSLtYPWF
2017-01-01 12:08:00       0.8510   tDZJrdMM
2017-01-01 12:09:00       0.4948   S1Rm2Sqb
2017-01-01 12:10:00       0.0279   TKtmys86
2017-01-01 12:11:00       0.5709   ww0Pe1cf
2017-01-01 12:12:00       0.8274   b07wKPsR
2017-01-01 12:13:00       0.3848   9vKTq3M3
2017-01-01 12:14:00       0.6579   crYxFvlI
2017-01-01 12:15:00       0.6568   yGUnCW6n

I need to write this dataframe into many parquet files. Of course, the following works:

table = pa.Table.from_pandas(dataframe)
pq.write_table(table, '\\\\mypath\\dataframe.parquet', flavor='spark')

My issue is that the resulting (single) parquet file gets too big.

How can I efficiently (memory-wise, speed-wise) split the writing into daily parquet files (and keep the spark flavor)? These daily files will be easier to read in parallel with spark later on.

Thanks!

ℕʘʘḆḽḘ

2 Answers

7

Making a string column `dt` based on the index will then allow you to write out the data partitioned by date, by running

pq.write_to_dataset(table, root_path='dataset_name', partition_cols=['dt'], flavor='spark')

This answer is based on this source (note that the source incorrectly lists the partition argument as `partition_columns`).
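
For example, a minimal sketch of that flow, reusing `dataframe` and the imports from the question (`dt` and the `dataset_name` root path are just illustrative names):

# illustrative: derive a string 'dt' column (one value per calendar day) from the index
dataframe['dt'] = dataframe.index.date.astype(str)

table = pa.Table.from_pandas(dataframe)

# writes one dt=YYYY-MM-DD subdirectory per day under dataset_name
pq.write_to_dataset(table, root_path='dataset_name',
                    partition_cols=['dt'], flavor='spark')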

David
  • you mean first creating `dataframe['dt'] = dataframe.index.date` ? – ℕʘʘḆḽḘ Jun 12 '18 at 20:32
  • I get `TypeError: __cinit__() got an unexpected keyword argument 'partition_columns'` – ℕʘʘḆḽḘ Jun 12 '18 at 20:43
  • The partitioned column will have to be a string, e.g. `2017-01-01`, but that doesn't seem to be the problem that the error is pointing to. Does it work if you drop the `flavor='spark'` option? – David Jun 12 '18 at 20:54
  • It does not. I actually used `dataframe['dt'] = dataframe.index.date.astype(str)`. I am using the `root_path` as `C://myfolder/`, is that ok? – ℕʘʘḆḽḘ Jun 12 '18 at 20:55
  • Looks like the keyword argument is `partition_cols`, not `partition_columns`. I updated the answer and confirmed it works locally – David Jun 12 '18 at 21:02
  • Really cool! I wonder if there is an equivalent of `pyarrow` for parquet in R. Right now I am loading that stuff using `sparklyr`, but could we load these `parquet` files in a regular R session? It's a mystery. Thanks anyway, great catch! – ℕʘʘḆḽḘ Jun 12 '18 at 21:04
  • @ℕʘʘḆḽḘ I found it extremely slow. Any chance to use dask for it? – rpanai Jun 12 '18 at 21:15
  • Hmm... I have not tried with my big dataset yet. Let me do some testing – ℕʘʘḆḽḘ Jun 12 '18 at 21:16
  • Yeah, that fails pretty miserably on my 100M obs data... sob sob – ℕʘʘḆḽḘ Jun 12 '18 at 21:52
  • @ℕʘʘḆḽḘ I gave it a try with dask (as I used it before to split a dataframe into different parquet files), but for this particular case I got an issue, see https://github.com/dask/dask/issues/3598. – rpanai Jun 12 '18 at 21:58
  • @ℕʘʘḆḽḘ I partially solved the problem with dask https://github.com/dask/dask/issues/3598#issuecomment-396970081 (a rough sketch of that route is shown after these comments). Saving a df with about 10M rows takes less than 20 seconds on my machine. Now it will be interesting to see if we can set `flavor='spark'` and name the files accordingly. – rpanai Jun 13 '18 at 15:10
  • @user32185 yes, that is interesting. I am really looking forward to a spark flavored solution here. perhaps you can chime in here as well? https://github.com/apache/arrow/issues/2138 – ℕʘʘḆḽḘ Jun 13 '18 at 15:13
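
A rough sketch of the dask route discussed in the comments above, assuming dask is installed; `dataset_name`, the `dt` column, and the number of partitions are illustrative choices, and behaviour can vary with the dask/engine version (see the linked issue):

import dask.dataframe as dd

# illustrative: same string partition key as in the answers
dataframe['dt'] = dataframe.index.date.astype(str)

# split the pandas dataframe into chunks that dask can process in parallel
ddf = dd.from_pandas(dataframe, npartitions=60)  # npartitions chosen arbitrarily

# write one dt=YYYY-MM-DD directory per day using the pyarrow engine
ddf.to_parquet('dataset_name', engine='pyarrow', partition_on=['dt'])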
2

The solution presented by David doesn't solve the problem, as it generates a parquet file for every index value. But this slightly modified version does the trick:

import pandas as pd
import numpy as np
import pyarrow.parquet as pq
import pyarrow as pa
idx = pd.date_range('2017-01-01 12:00:00.000', '2017-03-01 12:00:00.000',
                    freq='T')

df = pd.DataFrame({'numeric_col': np.random.rand(len(idx)),
                   'string_col': pd.util.testing.rands_array(8,len(idx))},
                  index = idx)

df["dt"] = df.index
df["dt"] = df["dt"].dt.date
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table, root_path='dataset_name', partition_cols=['dt'], 
                    flavor='spark')
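
For completeness, a quick way to read the partitioned dataset back with pyarrow and sanity check the result (Spark can also discover the dt= partition directories on its own); `dataset_name` is the same illustrative path used above:

# read the whole partitioned dataset back into a single pandas DataFrame;
# the dt partition values are reconstructed from the directory names
df_back = pq.ParquetDataset('dataset_name').read().to_pandas()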
rpanai
  • I think he was suggesting the same thing. The issue is to make it work with a very large dataframe... – ℕʘʘḆḽḘ Jun 13 '18 at 15:38
  • Yes, but in that case you saved a file for each index value; now it's a file for every day. If you try this example from 1998-01-01 (which is more than 10M rows) you can save it in about 40 seconds. – rpanai Jun 13 '18 at 15:43
  • Interesting, but this is actually what I did with my large dataframe and I ended up with memory issues... I set dt as index.date.astype(str) – ℕʘʘḆḽḘ Jun 13 '18 at 15:47
  • I think that this is the problem. Every time you use `object` or `string` instead of numeric/datetime you're going to use more memory. Give it a try without changing the type. – rpanai Jun 13 '18 at 15:54