I've run into a weird situation where the simplest possible Spark application seemingly completed the same job twice.
What I've done
The application itself executes the query:
SELECT date, field1, field2, ..., field10
FROM table1
WHERE field1 = <some number>
AND date BETWEEN date('2018-05-01') AND date('2018-05-30')
ORDER BY 1
and stores the results into HDFS.
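For reference, here is a minimal Scala sketch (spark-shell style) of what the application roughly does; the SparkSession setup, the abbreviated field list, and the output path are assumptions on my part, not the actual code:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("select-and-store")
  .enableHiveSupport()            // table1 is resolved through the metastore
  .getOrCreate()

val query = """
  SELECT date, field1, field2, field3, field4, field5, field6, field7
  FROM table1
  WHERE field1 = 1234567890
    AND date BETWEEN date('2018-05-01') AND date('2018-05-30')
  ORDER BY 1"""

// The tiny result set goes back to HDFS as CSV, which matches the
// InsertIntoHadoopFsRelationCommand / Coalesce 16 in the plan below.
spark.sql(query)
  .coalesce(16)
  .write
  .mode("overwrite")
  .csv("hdfs://hadoop/root/tmp/output")    // hypothetical output path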
Table table1 is a bunch of Parquet files stored on HDFS, partitioned as follows:
/root/date=2018-05-01/hour=0/data-1.snappy.parquet
/root/date=2018-05-01/hour=0/data-2.snappy.parquet
...
/root/date=2018-05-01/hour=1/data-1.snappy.parquet
...
/root/date=2018-05-02/hour=0/data-1.snappy.parquet
...
etc.
All Parquet files are between 700 MB and 2 GB in size and have the same schema: 10 non-null fields of int or bigint types.
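With this layout, partition discovery is what lets Spark prune on date and hour (see the PartitionFilters in the plan below). The table is actually registered as default.table1 in the metastore, but if it were not, the same data could be exposed like this (a sketch, path and view name assumed):
// Reading the table root lets Spark discover `date` and `hour`
// as partition columns and prune them at planning time.
val table1 = spark.read.parquet("hdfs://hadoop/table1")
table1.createOrReplaceTempView("table1")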
The result of the application is tiny in size -- only a couple of thousand rows.
My Spark application was running on YARN in cluster mode. The base Spark parameters were:
spark.driver.memory=2g
spark.executor.memory=4g
spark.executor.cores=4
spark.dynamicAllocation.enabled=true
spark.shuffle.service.enabled=true
spark.submit.deployMode=cluster
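For completeness, a small sketch of how one might confirm at runtime that these settings actually took effect (assuming a SparkSession named spark; the same values also show up on the Environment tab of the Spark UI):
Seq("spark.driver.memory",
    "spark.executor.memory",
    "spark.executor.cores",
    "spark.dynamicAllocation.enabled",
    "spark.shuffle.service.enabled")
  .foreach(k => println(s"$k = ${spark.conf.get(k, "<not set>")}"))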
During execution a couple of containers were preempted, but no errors or failures occurred. The whole application completed in one attempt.
The weird thing
Screenshots from Spark UI:
As can be seen, stages 2 and 4 both processed the same number of input rows, but stage 4 also did some shuffling (those were the result rows). The failed tasks are the ones whose containers were preempted.
So it looks like my application processed the same files twice.
I have no clue how that's possible or what happened. Please help me understand why Spark is doing such a weird thing.
Actual physical plan:
== Physical Plan ==
Execute InsertIntoHadoopFsRelationCommand InsertIntoHadoopFsRelationCommand hdfs://hadoop/root/tmp/1530123240802-PrQXaOjPoDqCBhfadgrXBiTtfvFrQRlB, false, CSV, Map(path -> /root/tmp/1530123240802-PrQXaOjPoDqCBhfadgrXBiTtfvFrQRlB), Overwrite, [date#10, field1#1L, field0#0L, field3#3L, field2#2L, field5#5, field4#4, field6#6L, field7#7]
+- Coalesce 16
+- *(2) Sort [date#10 ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(date#10 ASC NULLS FIRST, 200)
+- *(1) Project [date#10, field1#1L, field0#0L, field3#3L, field2#2L, field5#5, field4#4, field6#6L, field7#7]
+- *(1) Filter (isnotnull(field1#1L) && (field1#1L = 1234567890))
+- *(1) FileScan parquet default.table1[field0#0L,field1#1L,field2#2L,field3#3L,field4#4,field5#5,field6#6L,field7#7,date#10,hour#11] Batched: true, Format: Parquet, Location: InMemoryFileIndex[hdfs://hadoop/table1], PartitionCount: 714, PartitionFilters: [(date#10 >= 17652), (date#10 <= 17682)], PushedFilters: [IsNotNull(field1), EqualTo(field1,1234567890)], ReadSchema: struct<field0:bigint,field1:bigint,field2:bigint,field3:bigint,field4:int,field5:int,field6:bigint,field7:...
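(The same plan can be printed from the application itself, assuming the query string from the sketch above is at hand:)
// Physical plan only:
spark.sql(query).explain()
// Parsed, analyzed, optimized and physical plans:
spark.sql(query).explain(true)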
Here are DAGs for Stages 2 and 4: