Suppose we have an uncompressed HDFS file of 320 blocks stored on a cluster of 16 data nodes, with 20 blocks on each node, and we use Spark to read this file into an RDD (without explicitly passing minPartitions when creating the RDD):
textFile = sc.textFile("hdfs://input/war-and-peace.txt")
If we have 16 executors, one on each node, how many partitions will the resulting RDD have per executor? Will Spark create one partition per HDFS block, i.e. 20 partitions per executor (320 in total)?
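For what it's worth, the total partition count can be checked directly with RDD.getNumPartitions(). A minimal sketch below, using the same HDFS path as above (the app name is just a placeholder); note it reports the total across the RDD, not a per-executor figure:

from pyspark import SparkContext

sc = SparkContext(appName="partition-count-check")

# Read the file without passing minPartitions, letting Spark/Hadoop
# derive the number of input splits from the HDFS block layout.
textFile = sc.textFile("hdfs://input/war-and-peace.txt")

# Total number of partitions across the whole RDD (not per executor).
print(textFile.getNumPartitions())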