7

How do I store custom metadata in a ParquetDataset using pyarrow?

For example, if I create a Parquet dataset using Dask

import dask
dask.datasets.timeseries().to_parquet('temp.parq')

I can then read it using pyarrow

import pyarrow.parquet as pq
dataset = pq.ParquetDataset('temp.parq')

However, the method I would use for writing metadata to a single Parquet file (outlined in How to write Parquet metadata with pyarrow?) does not work for a ParquetDataset, since there is no replace_schema_metadata function or similar.
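For reference, the single-file approach looks roughly like this (a minimal sketch, assuming the replace_schema_metadata route from the linked question; the b'type' key is just an example):

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({'x': [1, 2, 3]})

# Merge the custom key/value metadata with whatever is already on the schema,
# since replace_schema_metadata overwrites the existing metadata.
custom_metadata = {b'type': b'mydataset'}
merged = {**(table.schema.metadata or {}), **custom_metadata}
table = table.replace_schema_metadata(merged)

pq.write_table(table, 'single_file.parq')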

I think I would probably like to write a custom _common_metadata file, as the metadata I'd like to store pertains to the whole dataset. I imagine the procedure would be something like:

meta = pq.read_metadata('temp.parq/_common_metadata')
custom_metadata = { b'type': b'mydataset' }
merged_metadata = { **custom_metadata, **meta.metadata }
# TODO: Construct FileMetaData object with merged_metadata
new_meta.write_metadata_file('temp.parq/_common_metadata')
Dahn
    You can convert the Parquet schema to an Arrow schema (`dataset.schema.to_arrow_schema()`), and pass that to `pq.write_metadata`? Any metadata set in the Arrow schema will be preserved in the Parquet FileMetaData. – joris Sep 10 '21 at 14:14
  • @joris thank you, that was indeed helpful, however, I think my original question was a bit misleading. I have now updated it with a hopefully clearer description of my problem. – Dahn Sep 21 '21 at 07:32
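A minimal sketch of what the approach from the first comment could look like (assuming dataset.schema exposes to_arrow_schema(), as with the legacy ParquetDataset; the b'type' key is just an example):

import pyarrow.parquet as pq

dataset = pq.ParquetDataset('temp.parq')

# Convert the Parquet schema to an Arrow schema and attach the custom
# key/value metadata, keeping any metadata that is already present.
arrow_schema = dataset.schema.to_arrow_schema()
custom_metadata = {b'type': b'mydataset'}
merged_metadata = {**(arrow_schema.metadata or {}), **custom_metadata}
arrow_schema = arrow_schema.with_metadata(merged_metadata)

# Metadata set on the Arrow schema is preserved in the Parquet FileMetaData
# that write_metadata produces.
pq.write_metadata(arrow_schema, 'temp.parq/_common_metadata')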

1 Answer

2

One possibility (though it does not directly answer the question) is to use Dask.

import dask

# Sample data
df = dask.datasets.timeseries()

df.to_parquet('test.parq', custom_metadata={'mymeta': 'myvalue'})

Dask does this by writing the metadata to all the files in the directory, including _common_metadata and _metadata.

from pathlib import Path
import pyarrow.parquet as pq

files = Path('test.parq').glob('*')

all([b'mymeta' in pq.ParquetFile(file).metadata.metadata for file in files])
# True
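To read the custom metadata back for the whole dataset, it should then also be available from _common_metadata (a small check, assuming the dataset written above):

import pyarrow.parquet as pq

# Key/value metadata is stored as bytes in the Parquet footer.
meta = pq.read_metadata('test.parq/_common_metadata')
meta.metadata[b'mymeta']  # expected: b'myvalue'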
Dahn