
I want to understand what is happening under the hood when I run the following script named t1.py with python3 t1.py. Specifically, I have the following questions:

  1. What kind of code is submitted to the Spark worker node? Is it the Python code itself, or a translated Java equivalent, that is sent to the worker?
  2. Is the `add` operation in the `reduce` treated as a UDF and therefore run in a Python subprocess on the worker node?
  3. If the `add` operation runs in a Python subprocess on the worker node, does the worker JVM communicate with the Python subprocess for each number in a partition being added? If so, that would mean a lot of overhead.
    #!/home/python3/venv/bin/python3
    # this file is named t1.py
    from pyspark.sql import SparkSession
    from operator import add
    from datetime import datetime

    n = int(100000000 / 1)  # number of integers to sum
    print("n =", n)
    spark = SparkSession.builder.appName('ai_project').getOrCreate()

    start = datetime.now()
    t = spark.sparkContext.parallelize(range(n))  # distributed RDD of ints
    a = t.reduce(add)                             # sum via operator.add
    print(a)
    end = datetime.now()
    print("end for spark rdd sum:", end, end - start)
Charles Ju

1 Answer


In PySpark, the Python and JVM code live in separate OS processes. PySpark uses Py4J, a framework that facilitates interoperation between the two languages, to exchange data between the Python and JVM processes.

When you launch a PySpark job, it starts as a Python process, which then spawns a JVM instance and runs some PySpark-specific code in it. It then instantiates a Spark session in that JVM, which becomes the driver program that Spark sees. That driver program connects to the Spark master or spawns an in-process one, depending on how the session is configured.
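You can see this split for yourself from the Python driver by poking at the Py4J gateway. This is a sketch that relies on private attributes (`_gateway`, `_jvm`), which are internal and may change between Spark versions:

    from pyspark.sql import SparkSession

    # Illustrative only: _gateway and _jvm are private PySpark internals.
    spark = SparkSession.builder.appName('py4j_demo').getOrCreate()
    sc = spark.sparkContext
    print(sc._gateway)                                            # the Py4J JavaGateway object
    print(sc._jvm.java.lang.System.getProperty("java.version"))   # a call executed inside the JVM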

When you create RDDs or DataFrames, they are stored in the memory of the Spark cluster just like RDDs and DataFrames created by Scala or Java applications. Transformations and actions on them work just as they do in the JVM, with one notable difference: anything that involves passing the data through Python code runs outside the JVM. So, if you create a DataFrame and do something like:

df.select("foo", "bar").where(df["foo"] > 100).count()

this runs entirely in the JVM, as there is no Python code that the data must pass through. On the other hand, if you do:

a = t.reduce(add)

since the `add` operator is a Python one, the RDD partitions get serialised and sent to one or more Python worker processes, where the partial reductions are performed; the partial results are then serialised again, returned to the JVM, and finally transferred over to the Python driver process for the final reduction.
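For comparison, here is a sketch (reusing the question's illustrative appName) of how the same sum can be computed without the data ever crossing into Python workers, because `F.sum` is a built-in JVM aggregate:

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName('ai_project').getOrCreate()

    # spark.range(n) produces a single-column ("id") DataFrame of longs;
    # the aggregation runs entirely inside the JVM executors.
    n = 100000000
    total = spark.range(n).agg(F.sum("id")).collect()[0][0]
    print(total)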

The way this works (which covers your Q1) is like this:

  • each Spark JVM executor spawns a new Python subprocess running a special PySpark script
  • the Python driver serialises the bytecode that has to be executed by each Spark task (e.g., the add operator) and pickles it together with some additional data (see the pickling sketch after this list)
  • the JVM executor serialises its RDD partitions and sends them over to its Python subprocess together with the serialised Python bytecode, which it received from the driver
  • the Python code runs over the RDD data
  • the result is serialised back and sent to the JVM executor
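The pickling step from the list above can be sketched like this. PySpark actually ships its own copy of cloudpickle so that lambdas and closures can be serialised too; plain `pickle` is used here only to keep the sketch dependency-free, and it works for `operator.add` because built-in functions are pickled by reference:

    import pickle
    from operator import add

    # "Driver" side: turn the function into bytes.
    payload = pickle.dumps(add)

    # "Worker" side: reconstruct the function and apply it to some data.
    func = pickle.loads(payload)
    print(func(2, 3))   # 5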

The JVM executors use network sockets to talk to the Python subprocesses they spawn, and the special PySpark scripts they launch run a loop whose task is to sit there and wait for serialised data and bytecode to execute.
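As a toy model only (the real protocol lives in python/pyspark/worker.py and is considerably more involved), the loop on the Python side amounts to: read a length-prefixed, pickled (function, partition) pair from the socket, run the function, and send the pickled result back:

    import pickle
    import socket
    import struct

    def recv_exactly(conn: socket.socket, n: int) -> bytes:
        """Read exactly n bytes from the connection."""
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise EOFError("connection closed")
            buf += chunk
        return buf

    def worker_loop(conn: socket.socket) -> None:
        while True:
            header = conn.recv(4)
            if not header:                                   # executor closed the connection
                break
            size = struct.unpack(">I", header)[0]
            func, partition = pickle.loads(recv_exactly(conn, size))
            result = pickle.dumps(func(partition))           # run the shipped code over the data
            conn.sendall(struct.pack(">I", len(result)) + result)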

Regarding Q3, the JVM executors transfer whole RDD partitions to the Python subprocess, not single items, so there is no per-element round trip. You should still strive to use Pandas UDFs, since those can be vectorised.
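As an illustrative sketch of that advice (column names and session setup are assumptions, not taken from the question), compare a row-at-a-time Python UDF with a vectorised Pandas UDF; both run in the Python worker, but the latter processes whole Arrow batches per call:

    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf, pandas_udf
    from pyspark.sql.types import LongType

    spark = SparkSession.builder.appName('udf_demo').getOrCreate()
    df = spark.range(1000000)                     # one long column named "id"

    @udf(returnType=LongType())
    def plus_one_row(x):                          # invoked once per row
        return x + 1

    @pandas_udf(LongType())
    def plus_one_vec(s: pd.Series) -> pd.Series:  # invoked once per Arrow batch
        return s + 1

    df.select(plus_one_row("id")).count()
    df.select(plus_one_vec("id")).count()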

If you are interested in the details, start with the source code of python/pyspark/rdd.py and take a look at the RDD class.

Hristo Iliev
  • Great answer. However, I wonder why the `add` operator is not mapped to a built-in JVM operation and thus run in the JVM? It is a standard operator. – Charles Ju May 15 '20 at 13:07
    The `add` operator in Python is polymorphic. When you do something like `t.reduce(_ + _)` in Scala, the compiler knows the types of the two arguments of `+` because `t` itself is `RDD[Int]`, so it produces a lambda function that takes two `Int`s and sums them. In Python, you'll have to somehow analyse the RDD and use some kind of type and operator matching to emit proper JVM code... I guess it is more effort with little practical benefit. – Hristo Iliev May 15 '20 at 13:38
  • There could also be complicated functions, and in that case it makes no sense to convert the whole function to a JVM operation; the current way is easier. Great explanation @Hristo'away'Iliev – Vinay Emmaadii Mar 01 '21 at 01:30
  • @Hristo'away'Iliev I think for Q3, if I am not wrong, with operations like `mapPartitions` the entire RDD partition is sent to the Python subprocess, but if you are using normal operations like `map` it will send single items. Please let me know if I am wrong. – Vinay Emmaadii Mar 01 '21 at 02:57
  • @VinayEmmaadii all RDD operations in PySpark that involve transition between JVM and Python are implemented using `mapPartitions` or `mapPartitionsWithIndex`. It's right there in `rdd.py`. – Hristo Iliev May 17 '21 at 10:14
  • @HristoIliev Really detailed answer, is there a reference for your answer? Thanks!! – Pro_gram_mer Aug 16 '21 at 07:17
  • @Pro_gram_mer the reference is the source code of `rdd.py`. There is a link to it in the last sentence of my answer. – Hristo Iliev Aug 17 '21 at 09:17
  • @HristoIliev Hi, this is a question that I posted on Stack Overflow: https://stackoverflow.com/questions/68378777/pyspark-pandas-udf-slower-than-single-thread . The problem is that using pandas_udf is actually slower than a single thread in Python..., which is definitely unexpected to me... Can you please take a look? Thanks! – Pro_gram_mer Aug 23 '21 at 08:22
  • @HristoIliev can you provide a link to the code that spawns a JVM instance and runs the Java side of Py4J? I looked at this example: https://www.py4j.org/index.html. I can't ask my user to write that piece of Java code and run it before using my Python wrapper. It should happen invisibly to the user. Thanks. – morpheus Aug 20 '23 at 03:25
  • @morpheus I'm not sure what you are trying to achieve, but the part that launches the JVM gateway in PySpark is here: https://github.com/apache/spark/blob/master/python/pyspark/java_gateway.py#L36 – Hristo Iliev Aug 21 '23 at 10:40