19

I am having a lot of trouble finding the answer to this question. Let's say I write a dataframe to parquet, and I use repartition combined with partitionBy to get a nicely partitioned parquet file. See below:

df.repartition(col("DATE")).write.partitionBy("DATE").parquet("/path/to/parquet/file")

Now later on I would like to read the parquet file so I do something like this:

val df = spark.read.parquet("/path/to/parquet/file")

Is the dataframe partitioned by "DATE"? In other words, if a parquet file is partitioned, does Spark maintain that partitioning when reading it into a Spark dataframe, or is it randomly partitioned?

An explanation of why or why not would be helpful as well.

Adam
  • You will have the same number of partitions as there are folders matching `/path/to/parquet/file/DATE=*` – philantrovert Jun 13 '18 at 08:20
  • @philantrovert I was reading about some concerns that this approach causes work to be done on the Driver. For metadata I would imagine that is not an issue - or is it? Also, when using S3, I am assuming the Hive metastore need not be updated for partitioned parquet access necessarily. Or would you recommend Msck repair table ... always (as they are external tables). Thanks in advance. – thebluephantom Nov 12 '18 at 12:56

3 Answers

11

The number of partitions acquired when reading data stored as parquet follows many of the same rules as reading partitioned text:

  1. If SparkContext.minPartitions >= the partition count in the data, SparkContext.minPartitions will be returned.
  2. If the partition count in the data >= SparkContext.parallelism, SparkContext.parallelism will be returned, though in some very small partition cases, #3 may be true instead.
  3. Finally, if the partition count in the data is somewhere between SparkContext.minPartitions and SparkContext.parallelism, generally you'll see the data's partition count reflected in the dataset partitioning.

Note that it's rare for a partitioned parquet file to have full data locality for a partition, meaning that, even when the partition count in the data matches the read partition count, there is a strong likelihood that the dataset should be repartitioned in memory if you're trying to achieve partition data locality for performance.
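As a quick sanity check (a minimal sketch using the path from the question, not part of the original answer), you can inspect the partition count right after the read:

val df = spark.read.parquet("/path/to/parquet/file")
// This count is driven by file sizes and the parallelism/read settings above, not by the DATE column
println(df.rdd.getNumPartitions)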

Given your use case above, I'd recommend immediately repartitioning on the "DATE" column if you're planning to leverage partition-local operations on that basis. The above caveats regarding minPartitions and parallelism settings apply here as well.

import org.apache.spark.sql.functions.col

// repartition returns a new DataFrame rather than modifying it in place, so keep the result
val df = spark.read.parquet("/path/to/parquet/file").repartition(col("DATE"))
bsplosion
  • It's been a while since you posted this answer, but do you maybe have a source for this? Do you just know it from the source code, or is there any documentation that has this info? – danielsepulvedab Oct 22 '21 at 13:27
  • @danielsepulvedab I don't believe I was able to find any particular documentation about this, and after just searching again, I'm still not finding anything. I'd experimented pretty thoroughly for a series of projects at the time which fed into this response, but I suppose a caveat is apropos: **this partitioning behavior is subject to arbitrary change, may be implementation-specific, and was written for Spark versions 2.x** – bsplosion Oct 24 '21 at 16:04
  • I see, thank you! :) – danielsepulvedab Oct 25 '21 at 07:41
0

You would get the number of partitions based on the Spark config spark.sql.files.maxPartitionBytes, which defaults to 128 MB. The data would not be partitioned by the column that was used with partitionBy while writing.
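As a rough sketch (the lowered threshold and the check below are illustrative, not from the original answer), you can change that setting before the read and watch the partition count move:

// Hypothetical example: shrink the max bytes per input partition to get more read partitions
spark.conf.set("spark.sql.files.maxPartitionBytes", 32L * 1024 * 1024)  // 32 MB instead of the default 128 MB
val df = spark.read.parquet("/path/to/parquet/file")
println(df.rdd.getNumPartitions)  // grows as the threshold shrinks; still not partitioned by DATE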

Reference: https://spark.apache.org/docs/latest/sql-performance-tuning.html

ravi malhotra
0

In your question, there are two ways we could say the data are being "partitioned", which are:

  1. via repartition, which uses a hash partitioner to distribute the data into a specific number of partitions. If, as in your question, you don't specify a number, the value in spark.sql.shuffle.partitions is used, which has default value 200. A call to .repartition will usually trigger a shuffle, which means the partitions are now spread across your pool of executors.

  2. via partitionBy, which is a method specific to a DataFrameWriter that tells it to partition the data on disk according to a key. This means the data written are split across subdirectories named according to your partition column, e.g. /path/to/parquet/file/DATE=<individual DATE value>. In this example, only rows with a particular DATE value are stored in each DATE= subdirectory.
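To make the distinction concrete, here is a small sketch (assuming a DataFrame df that has a DATE column; the directory layout shown in the comments is illustrative, not from the original answer):

import org.apache.spark.sql.functions.col

// In memory: shuffle so that all rows with the same DATE land in the same DataFrame partition
val byDate = df.repartition(col("DATE"))

// On disk: the writer creates one subdirectory per DATE value, for example
//   /path/to/parquet/file/DATE=2021-01-01/part-...snappy.parquet
//   /path/to/parquet/file/DATE=2021-01-02/part-...snappy.parquet
byDate.write.partitionBy("DATE").parquet("/path/to/parquet/file")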

Given these two uses of the term "partitioning," there are subtle aspects in answering your question. Since you used partitionBy and asked whether Spark "maintains the partitioning", I suspect what you're really curious about is whether Spark will do partition pruning, which is a technique used to drastically improve the performance of queries that have filters on a partition column. If Spark knows the values you seek cannot be in specific subdirectories, it won't waste any time reading those files, and hence your query completes much quicker.

  1. If the way you're reading the data isn't partition-aware, you'll get a number of partitions something like what's in bsplosion's answer. Spark won't employ partition pruning, and hence you won't get the benefit of Spark automatically skipping the reads of certain files to speed things up.¹

  2. Fortunately, reading parquet files in Spark that were written with partitionBy is a partition-aware read. Even without a metastore like Hive that tells Spark the files are partitioned on disk, Spark will discover the partitioning automatically. Please see partition discovery in Spark for how this works in parquet.

I recommend testing reading your dataset in spark-shell so that you can easily see the output of .explain, which will let you verify that Spark correctly finds the partitions and can prune out the ones that don't contain data of interest in your query. A nice writeup on this can be found here. In short, if you see PartitionFilters: [], it means that Spark isn't doing any partition pruning. But if you see something like PartitionFilters: [isnotnull(date#3), (date#3 = 2021-01-01)], Spark is only reading in a specific set of DATE partitions, and hence the query execution is usually a lot faster.
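For instance (a sketch assuming the path from the question and a made-up DATE value), a filtered read in spark-shell could look like the following; check the FileScan node of the printed plan for a non-empty PartitionFilters list:

import org.apache.spark.sql.functions.col

val df = spark.read.parquet("/path/to/parquet/file")
// If pruning works, the plan shows something like PartitionFilters: [isnotnull(DATE#0), (DATE#0 = 2021-01-01)]
df.filter(col("DATE") === "2021-01-01").explain()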

¹ A separate detail is that parquet stores statistics about data in its columns inside of the files themselves. If these statistics can be used to eliminate chunks of data that can't match whatever filtering you're doing, e.g. on DATE, then you'll see some speedup even if the way you read the data isn't partition-aware. This is called predicate pushdown. It works because the files on disk will still contain only specific values of DATE when using .partitionBy. More info can be found here.

dchristle