I am currently trying to save and read dask dataframes to and from Parquet files. But when I save a dataframe with dask "to_parquet" and load it again with "read_parquet", the division information seems to get lost.
>>df.divisions
(Timestamp('2014-10-01 17:25:17.928000'), Timestamp('2014-10-01 17:27:18.000860'), Timestamp('2014-10-01 17:29:19.000860'), Timestamp('2014-10-01 17:31:19.000860'), Timestamp('2014-10-01 17:33:20.000860'), Timestamp('2014-10-01 17:35:20.763000'), Timestamp('2014-10-01 17:36:12.992860'))
>>df.to_parquet(folder)
>>del df
>>df = dask.dataframe.read_parquet(folder)
>>df.divisions
(None, None, None, None, None, None, None)
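For context, here is a minimal sketch of what I am doing; the column name, the number of partitions and the "folder" path are placeholders for my real setup, not the actual data:

import pandas as pd
import dask.dataframe as dd

# Small stand-in for my real data: a datetime index named 'timestamp'.
pdf = pd.DataFrame(
    {"value": range(600)},
    index=pd.date_range("2014-10-01 17:25:17.928", periods=600, freq="s", name="timestamp"),
)

# Six partitions -> seven division entries, as in the output above.
df = dd.from_pandas(pdf, npartitions=6)
print(df.known_divisions)   # True: divisions are the partition boundary timestamps

df.to_parquet("folder")     # "folder" is a placeholder path
df2 = dd.read_parquet("folder")
print(df2.known_divisions)  # False in my case: divisions come back as (None, ..., None)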
Is this intended behavior? My current workaround is to set the index again after loading, but that takes a lot of time.
>>df = dask.dataframe.read_parquet(folder, index=False).set_index('timestamp', sorted=True)
>>df.divisions
(Timestamp('2014-10-01 17:25:17.928000'), Timestamp('2014-10-01 17:27:18.000860'), Timestamp('2014-10-01 17:29:19.000860'), Timestamp('2014-10-01 17:31:19.000860'), Timestamp('2014-10-01 17:33:20.000860'), Timestamp('2014-10-01 17:35:20.763000'), Timestamp('2014-10-01 17:36:12.992860'))
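A variant of this workaround I am considering, to avoid rescanning the partition boundaries on every load: persist the divisions myself next to the Parquet data and hand them back to set_index. The "_divisions.json" side file is my own convention, not a dask feature, and "folder" is again a placeholder path:

import json
import pandas as pd
import dask.dataframe as dd

# At save time, before the dataframe is deleted: remember the divisions.
with open("folder/_divisions.json", "w") as f:
    json.dump([str(t) for t in df.divisions], f)

# At load time: pass the stored divisions back, so dask does not have to
# recompute the partition boundaries from the data.
with open("folder/_divisions.json") as f:
    divisions = [pd.Timestamp(t) for t in json.load(f)]

df = dd.read_parquet("folder", index=False).set_index(
    "timestamp", sorted=True, divisions=divisions
)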
Or am I missing some options while saving and loading?
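For completeness, this is roughly what I expected to work on the read side; I am not sure which keyword my dask/engine combination actually supports, so the flag names below are assumptions on my part:

import dask.dataframe as dd

# Newer dask releases seem to expose a switch to rebuild divisions from the
# Parquet row-group statistics (keyword name may differ between versions):
df = dd.read_parquet("folder", calculate_divisions=True)

# Older releases had a similarly named option:
# df = dd.read_parquet("folder", infer_divisions=True)

print(df.known_divisions)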