
I am new to PySpark and I have run into a configuration problem while using it.

I tried to create a DataFrame using the code snippet below:

from pyspark.sql import SparkSession

# Create a SparkSession object
spark = SparkSession.builder.appName("CreateDataFrame").getOrCreate()

# Use the SparkSession object to create a DataFrame
df_day_of_week = spark.createDataFrame([(0, "Sunday"), (1, "Monday"), (2, "Tuesday"), (3, "Wednesday"), (4, "Thursday"), (5, "Friday"), (6, "Saturday")], ["day_of_week_num", "day_of_week"])

# Show the DataFrame
df_day_of_week.show()

Below is the error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
Cell In[2], line 10
      7 df_day_of_week = spark.createDataFrame([(0, "Sunday"), (1, "Monday"), (2, "Tuesday"), (3, "Wednesday"), (4, "Thursday"), (5, "Friday"), (6, "Saturday")], ["day_of_week_num", "day_of_week"])
      9 # Show the DataFrame
---> 10 df_day_of_week.show()

File c:\Users\User\anaconda3\envs\data_streaming\lib\site-packages\pyspark\sql\dataframe.py:899, in DataFrame.show(self, n, truncate, vertical)
    893     raise PySparkTypeError(
    894         error_class="NOT_BOOL",
    895         message_parameters={"arg_name": "vertical", "arg_type": type(vertical).__name__},
    896     )
    898 if isinstance(truncate, bool) and truncate:
--> 899     print(self._jdf.showString(n, 20, vertical))
    900 else:
    901     try:

File c:\Users\User\anaconda3\envs\data_streaming\lib\site-packages\py4j\java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):

File c:\Users\User\anaconda3\envs\data_streaming\lib\site-packages\pyspark\errors\exceptions\captured.py:169, in capture_sql_exception..deco(*a, **kw)
    167 def deco(*a: Any, **kw: Any) -> Any:
    168     try:
--> 169         return f(*a, **kw)
    170     except Py4JJavaError as e:
    171         converted = convert_exception(e.java_exception)

File c:\Users\User\anaconda3\envs\data_streaming\lib\site-packages\py4j\protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o65.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 16) (host.docker.internal executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:192)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:166)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
    at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.SocketTimeoutException: Accept timed out
    at java.base/sun.nio.ch.NioSocketImpl.timedAccept(NioSocketImpl.java:708)
    at java.base/sun.nio.ch.NioSocketImpl.accept(NioSocketImpl.java:752)
    at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:675)
    at java.base/java.net.ServerSocket.platformImplAccept(ServerSocket.java:641)
    at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:617)
    at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:574)
    at java.base/java.net.ServerSocket.accept(ServerSocket.java:532)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:179)
    ... 30 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2785)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2721)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2720)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2720)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1206)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1206)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1206)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2984)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2923)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2912)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:971)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2263)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2284)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2303)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:530)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:483)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:61)
    at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:4177)
    at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:3161)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4167)
    at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:526)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:4165)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:118)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4165)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:3161)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:3382)
    at org.apache.spark.sql.Dataset.getRows(Dataset.scala:284)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:323)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:192)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:166)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
    at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
    at org.apache.spark.scheduler.Task.run(Task.scala:139)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    ... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
    at java.base/sun.nio.ch.NioSocketImpl.timedAccept(NioSocketImpl.java:708)
    at java.base/sun.nio.ch.NioSocketImpl.accept(NioSocketImpl.java:752)
    at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:675)
    at java.base/java.net.ServerSocket.platformImplAccept(ServerSocket.java:641)
    at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:617)
    at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:574)
    at java.base/java.net.ServerSocket.accept(ServerSocket.java:532)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:179)
    ... 30 more

Below is my configuration:

  • Java 8.0.3710.11
  • JDK 17.0.7
  • Spark 3.4.1
  • Hadoop 3.0.0
  • pyspark 3.4.1

However, I can run read.csv successfully, for example:

test = spark.read.csv('test.csv', header=True, sep='|')

Therefore, I cannot figure out the underlying problem.

Please let me know if extra information is required, thanks.

Michael

2 Answers


I hope this works for you.

findspark adds pyspark to your sys.path at runtime:

pip install findspark

Restart the kernel

import findspark 

findspark.init()
findspark.find()
from pyspark.sql import SparkSession

# Create a SparkSession object
spark = SparkSession.builder.appName("CreateDataFrame").getOrCreate()

# Use the SparkSession object to create a DataFrame
df_day_of_week = spark.createDataFrame([(0, "Sunday"), (1, "Monday"),
                                        (2, "Tuesday"), (3, "Wednesday"),
                                        (4, "Thursday"), (5, "Friday"),
                                        (6, "Saturday")],
                                       ["day_of_week_num", "day_of_week"])
# Show the DataFrame
df_day_of_week.show()
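
As a quick sanity check (a minimal sketch, not required for the fix itself), you can print which Spark installation findspark resolved and which Python interpreter the driver process is running under:

import sys
import findspark

findspark.init()
print(findspark.find())   # the SPARK_HOME directory findspark resolved, e.g. C:\Spark\spark-3.4.1-bin-hadoop3
print(sys.executable)     # the Python interpreter of the current (driver) process
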
Muhammad Ali
  • Thanks so much, it works! Does that mean my environment variables are set incorrectly? I have set SPARK_HOME to C:\Spark\spark-3.4.1-bin-hadoop3 and added C:\Spark\spark-3.4.1-bin-hadoop3\bin to Path – Michael Jul 23 '23 at 10:23
  • No, in my case, without findspark it uses the pip-installed pyspark version. – Muhammad Ali Jul 23 '23 at 14:48

It shouldn't be surprising that neither createDataFrame() nor read.csv() gives an error. Both are transformations, so Spark just records them "for later" without actually executing anything, in keeping with its lazy-evaluation paradigm.

You can see this, for instance, by changing the CSV file after calling read.csv(): nothing is read until an action runs.

show(), on the contrary, is an action, and this is where the Spark engine actually gets to work.
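
As a minimal illustration (reusing the spark session and data from your question), no job runs until the action at the end:

# Transformations only build the logical plan; no job is submitted yet:
df = spark.createDataFrame([(0, "Sunday"), (1, "Monday")], ["day_of_week_num", "day_of_week"])
filtered = df.filter(df.day_of_week_num > 0)   # still lazy, still no job

# show() is an action: a job is submitted, Python workers are started,
# and this is the point where "Python worker failed to connect back" surfaces.
filtered.show()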

The relevant error message in your log is "Python worker failed to connect back". This hints at a misconfiguration in your Spark setup.

You will find some possible solutions in: Python worker failed to connect back
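
One commonly suggested configuration for this error (a sketch under the assumption that the workers should use the same interpreter as your notebook; adjust to your environment) is to point PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON at your Python executable before creating the session:

import os
import sys

# Assumption: the notebook's interpreter (sys.executable) is the one Spark should use.
os.environ["PYSPARK_PYTHON"] = sys.executable          # interpreter for the Python workers
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable   # interpreter for the driver

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("CreateDataFrame").getOrCreate()

Note that these variables have to be set before the first SparkSession is created, so restart the kernel if one is already running.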

user2314737
  • Thanks for the information on the PySpark architecture! Actually, I can execute show() after read.csv(), but executing show() after createDataFrame() raises the error above, which does not make much sense to me – Michael Jul 23 '23 at 10:30