
I use spark-submit to run my shaded jar on a Spark standalone cluster, but the executor fails with this error:

22/12/06 15:21:25 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1) (10.37.2.77, executor 0, partition 0, PROCESS_LOCAL, 5133 bytes) taskResourceAssignments Map()
22/12/06 15:21:25 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on 10.37.2.77, executor 0: java.lang.ClassNotFoundException (org.apache.beam.runners.spark.io.SourceRDD$SourcePartition) [duplicate 1]
22/12/06 15:21:25 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2) (10.37.2.77, executor 0, partition 0, PROCESS_LOCAL, 5133 bytes) taskResourceAssignments Map()
22/12/06 15:21:25 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on 10.37.2.77, executor 0: java.lang.ClassNotFoundException (org.apache.beam.runners.spark.io.SourceRDD$SourcePartition) [duplicate 2]
22/12/06 15:21:25 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3) (10.37.2.77, executor 0, partition 0, PROCESS_LOCAL, 5133 bytes) taskResourceAssignments Map()
22/12/06 15:21:25 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on 10.37.2.77, executor 0: java.lang.ClassNotFoundException (org.apache.beam.runners.spark.io.SourceRDD$SourcePartition) [duplicate 3]
22/12/06 15:21:25 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
22/12/06 15:21:25 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
22/12/06 15:21:25 INFO TaskSchedulerImpl: Cancelling stage 0
22/12/06 15:21:25 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
22/12/06 15:21:25 INFO DAGScheduler: ResultStage 0 (collect at BoundedDataset.java:96) failed in 1.380 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.37.2.77 executor 0): java.lang.ClassNotFoundException: org.apache.beam.runners.spark.io.SourceRDD$SourcePartition
    at java.lang.ClassLoader.findClass(ClassLoader.java:523)
    at org.apache.spark.util.ParentClassLoader.findClass(ParentClassLoader.java:35)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at org.apache.spark.util.ParentClassLoader.loadClass(ParentClassLoader.java:40)
    at org.apache.spark.util.ChildFirstURLClassLoader.loadClass(ChildFirstURLClassLoader.java:48)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1988)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:458)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

My request looks like:

 curl -X POST http://xxxxxx:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "appResource": "/home/xxxx/xxxx-bundled-0.1.jar",
  "sparkProperties": {
    "spark.master": "spark://xxxxxxx:7077",
    "spark.driver.userClassPathFirst": "true",
    "spark.executor.userClassPathFirst": "true",
    "spark.app.name": "DataPipeline",
    "spark.submit.deployMode": "cluster",
    "spark.driver.supervise": "true"
  },
  "environmentVariables": {
    "SPARK_ENV_LOADED": "1"
  },
  "clientSparkVersion": "3.1.3",
  "mainClass": "com.xxxx.DataPipeline",
  "action": "CreateSubmissionRequest",
  "appArgs": [
    "--config=xxxx",
    "--runner=SparkRunner"
  ]
}'

I set "spark.driver.userClassPathFirst": "true" and "spark.executor.userClassPathFirst": "true" because my jar uses proto3. I am not sure why this class is not found on the executor. My Beam version is 2.41.0, Spark version 3.1.3, and Hadoop version 3.2.0.

Finally, I upgraded the shade plugin to 3.4.0, after which the relocation for protobuf worked, and I deleted "spark.driver.userClassPathFirst": "true" and "spark.executor.userClassPathFirst": "true". Everything works after that; spark-submit locally or via the REST API both work.

  • Please add the configuration you are using to build the shaded jar to the question. Also, have you relocated the classes? And how exactly are you submitting your code? Note, if using `userClassPathFirst` you have to carefully remove Spark, Hadoop, Scala classes (and many more) from your fat jar. – Moritz Dec 06 '22 at 13:48
  • 1. I've tried relocating the classes for protobuf3, but it didn't seem to work, so I set userClassPathFirst=true and it works. 2. I first build the shaded jar package, then copy it to the standalone Spark host and run spark-submit there in cluster mode (and I also tried remotely calling the REST API to submit the job, as above). Both encounter the same issue. Client mode works fine. 3. By "remove" do you mean I change the scope to "provided" or "runtime"? – user20420926 Dec 06 '22 at 17:12
  • Thanks, after upgrading the shade plugin to 3.4.0 the relocation works and everything works after that. – user20420926 Dec 06 '22 at 18:49
  • By removing I mean excluding those classes from the uber jar. If using `userClassPathFirst` that's critical, but it's recommended anyway. Those classes already exist on the Spark classpath; see details here: https://github.com/apache/beam/issues/23568#issuecomment-1286746306 – Moritz Dec 06 '22 at 21:37
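
For reference, a minimal sketch of what that exclusion can look like in a Maven build; the artifact coordinates below are typical Spark/Hadoop ones chosen for illustration and are not taken from the question's pom.xml. Declaring cluster-provided dependencies with "provided" scope keeps them out of the uber jar:

<!-- Hypothetical example: mark Spark (and other cluster-provided) dependencies
     as "provided" so the shade plugin does not package them into the uber jar. -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>3.1.3</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>3.2.0</version>
    <scope>provided</scope>
</dependency>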

1 Answer


Finally, I upgraded the shade plugin to 3.4.0, after which the relocation for protobuf worked, and I deleted "spark.driver.userClassPathFirst": "true" and "spark.executor.userClassPathFirst": "true". Everything works after that; spark-submit locally or via the REST API both work.

The protobuf relocation in the shade plugin configuration:

<relocations>
    <relocation>
        <pattern>com.google.protobuf</pattern>
        <shadedPattern>shaded.com.google.protobuf</shadedPattern>
    </relocation>
</relocations>
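
For context, a minimal sketch of how that relocation typically sits inside the Maven Shade Plugin configuration. The plugin version matches the 3.4.0 mentioned above; the execution wiring is an assumption for illustration, not copied from the original pom.xml:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <!-- Upgrading to 3.4.0 is what made the relocation take effect here. -->
    <version>3.4.0</version>
    <executions>
        <execution>
            <!-- Run the shade goal when the jar is packaged. -->
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <relocation>
                        <pattern>com.google.protobuf</pattern>
                        <shadedPattern>shaded.com.google.protobuf</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>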

Then both of the following invocations worked:

spark-submit --class XXXXPipeline --master spark://xxxx:7077 --deploy-mode cluster --supervise /xxxxx/xxxx-bundled-0.1.jar

and

curl -X POST http://xxxx:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "appResource": "xxxxx-bundled-0.1.jar",
  "sparkProperties": {
    "spark.master": "spark://XXXXXX:7077",
    "spark.jars": "xxxx-bundled-0.1.jar",
    "spark.app.name": "XXXPipeline",
    "spark.submit.deployMode": "cluster",
    "spark.driver.supervise": "true"
  },
  "environmentVariables": {
    "SPARK_ENV_LOADED": "1"
  },
  "clientSparkVersion": "3.1.3",
  "mainClass": "XXXXPipeline",
  "action": "CreateSubmissionRequest",
  "appArgs": []
}'