Yes, it is at least theoretically possible. First, some example data that forces a locality preference (there may well be a simpler example):
rdd1 = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)
rdd2 = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)
# Cache rdd1 and materialize it with an action so that its partitions
# gain preferred locations (the executors holding the cached blocks)
rdd1.cache().count()
rdd3 = rdd1.union(rdd2)
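Because rdd1 and rdd2 share the same partitioner, Spark can apply a partitioner-aware union instead of simply concatenating partitions, which is why locality can survive the union at all. A quick sanity check (a sketch; the exact behavior can vary across Spark versions):

rdd3.getNumPartitions()
# 8, not 16: the union reused the shared partitioning
rdd3.partitioner == rdd1.partitioner
# True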
Now you can define a helper. preferredLocations is not exposed in the PySpark RDD API, so the helper reaches through the Py4J gateway to the underlying Scala RDD:
from pyspark import SparkContext

def preferred_locations(rdd):
    def to_py_generator(xs):
        """Convert a Scala Seq to a Python generator."""
        j_iter = xs.iterator()
        while j_iter.hasNext():
            yield j_iter.next()

    # Get a handle on the JVM through the active SparkContext's Py4J gateway
    jvm = SparkContext._active_spark_context._jvm
    # Unwrap the underlying Scala RDD from the Java wrapper
    srdd = jvm.org.apache.spark.api.java.JavaRDD.toRDD(rdd._jrdd)
    # Ask the Scala RDD for its partitions and their preferred locations
    partitions = srdd.partitions()
    return {
        p.index(): list(to_py_generator(srdd.preferredLocations(p)))
        for p in partitions
    }
Applied to the union:
preferred_locations(rdd3)
# {0: ['...'],
# 1: ['...'],
# 2: ['...'],
# 3: ['...'],
# 4: [],
# 5: [],
# 6: [],
# 7: []}
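For contrast, the same union without the caching step should report no preferences at all, since nothing has been materialized in the block manager yet. A minimal sketch under the same setup (rdd_a and rdd_b are fresh, hypothetical counterparts of rdd1 and rdd2):

rdd_a = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)
rdd_b = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)
preferred_locations(rdd_a.union(rdd_b))
# Every partition should come back with an empty list, e.g.
# {0: [], 1: [], 2: [], ..., 7: []}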