12

I am able to write to Parquet format, partitioned by a column, like so:

jobname = args['JOB_NAME']
# header is a Spark DataFrame
header.repartition(1).write.parquet(
    's3://bucket/aws-glue/{}/header/'.format(jobname),
    'append',
    partitionBy='date')

But I am not able to do this with Glue's DynamicFrame.

header_tmp = DynamicFrame.fromDF(header, glueContext, "header")
glueContext.write_dynamic_frame.from_options(
    frame = header_tmp,
    connection_type = "s3",
    connection_options = {"path": 's3://bucket/output/header/'},
    format = "parquet")

I have tried passing partitionBy as part of the connection_options dict, since the AWS docs say that Glue does not support any format options for Parquet, but that didn't work.

Is this possible, and if so, how? As for my reason for doing it this way: I thought writing through the Glue API was needed for job bookmarking to work, and bookmarking is not currently working for me.

stewart99

2 Answers

13

From AWS Support (paraphrasing a bit):

As of today, Glue does not support the partitionBy parameter when writing to Parquet. This is in the pipeline to be worked on, though.

Using the Glue API to write to Parquet is required for the job bookmarking feature to work with S3 sources.

So as of today it is not possible to both partition Parquet files and enable the job bookmarking feature.

Edit: today (3/23/18) I found this in the documentation:

glue_context.write_dynamic_frame.from_options(
    frame = projectedEvents,
    connection_type = "s3",
    connection_options = {"path": "$outpath", "partitionKeys": ["type"]},
    format = "parquet")

That option may have always been there and both the AWS support person and I missed it, or it may only have been added recently. Either way, it seems like it is possible now.
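
Putting the pieces together for the bookmarking use case from the question, the write could look like the sketch below. It reuses the frame, path, and partition column from the question; the transformation_ctx name is my own placeholder, and transformation_ctx simply names this step so job bookmarks can track it.

from awsglue.dynamicframe import DynamicFrame

# Convert the Spark DataFrame to a DynamicFrame so the write goes through the Glue API
header_tmp = DynamicFrame.fromDF(header, glueContext, "header")

# Write partitioned Parquet; partitionKeys creates date=... folders under the path
glueContext.write_dynamic_frame.from_options(
    frame = header_tmp,
    connection_type = "s3",
    connection_options = {"path": "s3://bucket/output/header/", "partitionKeys": ["date"]},
    format = "parquet",
    transformation_ctx = "write_header")  # placeholder name for bookmarking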

stewart99
  • Here is the quote from the most recent Glue documentation: "Until recently the only way to write a DynamicFrame into partitions was to convert it to a Spark SQL DataFrame before writing. However, DynamicFrames now support native partitioning using a sequence of keys, using the partitionKeys option when creating a sink." So YES, it was added just recently. – Alex Skorokhod Apr 19 '18 at 03:19
  • Still, the "partitionKeys": ["type"] feature is useless... you will get an empty folder if you specify a partitioning key in this option. And if you remove this option, then your DataFrame will be written to S3 with the default number of partitions, i.e. 200. – Raj Mar 17 '19 at 10:35
  • Does it write in append mode? – whatsinthename Oct 22 '21 at 07:57
  • I used this newly added partitionKeys option and could write all data from the dynamic frame into the S3 folder in Parquet format. The writing of data honoured the partitionKeys option, as the data sits in folders created by the partitioning. However, when I run queries in Athena to retrieve the same data, Athena always returns an empty dataset. It seems the table created in the Data Catalog does not understand that the data in S3 is already partitioned. Am I missing something? – Amol Dixit Sep 06 '22 at 02:42
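
Regarding the Athena issue in the last comment: writing partitioned files does not by itself register the new partitions in the Data Catalog, so a crawler run, an MSCK REPAIR TABLE in Athena, or a catalog-updating sink is usually needed before Athena sees the data. A minimal sketch of the catalog-updating route, assuming Glue's getSink API and placeholder database/table names:

# Sketch: write partitioned Parquet and add the partitions to the Data Catalog.
# "my_database" and "header" are placeholder catalog names.
sink = glueContext.getSink(
    connection_type = "s3",
    path = "s3://bucket/output/header/",
    enableUpdateCatalog = True,
    partitionKeys = ["date"])
sink.setFormat("glueparquet")
sink.setCatalogInfo(catalogDatabase = "my_database", catalogTableName = "header")
sink.writeFrame(header_tmp)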
5

I use some of the columns from my DataFrame as the partitionKeys:

glueContext.write_dynamic_frame \
    .from_options(
        frame = some_dynamic_dataframe, 
        connection_type = "s3", 
        connection_options =  {"path":"some_path", "partitionKeys": ["month", "day"]},
        format = "parquet")
Dan K