
I'm new to PySpark and I tried to launch a PySpark standalone cluster.

  • I launched the master using: bin\spark-class2.cmd org.apache.spark.deploy.master.Master
  • I launched the worker using: bin\spark-class2.cmd org.apache.spark.deploy.worker.Worker -c 2 -m 2G spark://192.168.43.78:7077 (spark://192.168.43.78:7077 is the URL of the master).
  • I launched my code, which is:
# findspark must be initialised before importing pyspark
import findspark
findspark.init(r'C:\spark\spark-3.0.3-bin-hadoop2.7')

from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

conf = SparkConf()
conf.setMaster('spark://192.168.43.78:7077')
conf.setAppName('firstapp')
sc = SparkContext(conf=conf)
spark = SparkSession(sc)
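
In case the way I create the session matters, the equivalent form using the SparkSession builder (which, as far as I understand, is the usual pattern in Spark 3.x) would be:

from pyspark.sql import SparkSession

# Same master URL and app name as above, via the builder API
spark = (SparkSession.builder
         .master('spark://192.168.43.78:7077')
         .appName('firstapp')
         .getOrCreate())
sc = spark.sparkContext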

When I run it, I get this error:

ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
ERROR SparkContext: Error initializing SparkContext.
java.lang.NullPointerException.

sc = SparkContext(conf=conf) <---
ERROR AsyncEventQueue: Listener AppStatusListener threw an exception
java.lang.NullPointerException.

Is there a way to fix that error?

  • `Reason: All masters are unresponsive!` Is the master running on `spark://192.168.43.78:7077` doing OK? – AminMal May 25 '22 at 09:32
  • Yes, I think so; it works fine, no errors on it or anything. It also shows up fine when I go to `localhost:8080`. – ab cosmoweb May 25 '22 at 15:25
  • So if it works fine locally, there might be a network or connection issue or something related. Have you tried pinging the ip:port? (Although I'm not sure about this, just guessing.) – AminMal May 25 '22 at 17:25
  • Nope, I haven't tried that. Are there configurations in the spark/conf files, for example `spark-env.sh` and the other files, that I need so I can run the `standalone cluster`? – ab cosmoweb May 25 '22 at 17:34
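
Following the suggestion in the comments to check the ip:port, a minimal way to test whether the master port is reachable from the machine running the driver is a plain TCP connect (a sketch using only Python's standard socket module, nothing Spark-specific; the host and port are taken from the master URL above):

import socket

# Try to open a TCP connection to the standalone master (spark://192.168.43.78:7077)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    try:
        s.connect(('192.168.43.78', 7077))
        print('Master port is reachable')
    except OSError as exc:
        print(f'Cannot reach master port: {exc}')

If this fails, the problem is presumably at the network level (firewall, binding) rather than in the PySpark code itself.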
