How do wide transformations actually work with respect to the spark.sql.shuffle.partitions configuration?
If I have the following program:
spark.conf.set("spark.sql.shuffle.partitions", "5")

val df = spark
  .read
  .option("inferSchema", "true")
  .option("header", "true")
  .csv("...\input.csv")

df.sort("sal").take(200)
Does this mean that sort would output 5 new partitions (as configured), and that Spark then takes 200 records from those 5 partitions?
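To check this myself, I tried inspecting the partition counts before and after the sort. Here is a minimal, self-contained sketch of what I ran (it uses a local SparkSession and a small in-memory DataFrame in place of my CSV; the column name sal is from my example above):

import org.apache.spark.sql.SparkSession

object ShufflePartitionsCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("shuffle-partitions-check")
      .getOrCreate()
    import spark.implicits._

    spark.conf.set("spark.sql.shuffle.partitions", "5")

    // Small in-memory stand-in for the CSV (name, sal)
    val df = Seq(("a", 100), ("b", 50), ("c", 200), ("d", 75)).toDF("name", "sal")

    // Before the wide transformation: partitioning comes from the source
    println(s"input partitions:  ${df.rdd.getNumPartitions}")

    // After sort (a wide transformation): I would expect the shuffle to
    // produce spark.sql.shuffle.partitions output partitions, i.e. 5 here
    // (though adaptive query execution may coalesce them on newer versions)
    val sorted = df.sort("sal")
    println(s"sorted partitions: ${sorted.rdd.getNumPartitions}")

    // Then take(200) collects at most 200 rows from the sorted result
    sorted.take(200).foreach(println)

    spark.stop()
  }
}

Is that the right way to reason about it, or does take(200) change how the shuffle behaves?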