
I have a 6-node cluster, one of which is Spark-enabled.

I also have a Spark job that I would like to submit to the cluster via that node, so I enter the following command:

spark-submit --class VDQConsumer --master spark://node-public-ip:7077 target/scala-2.10/vdq-consumer-assembly-1.0.jar

It launches the Spark UI on that node, but the driver eventually ends up here:

15/05/14 14:19:55 INFO SparkContext: Added JAR file:/Users/cwheeler/dev/git/vdq-consumer/target/scala-2.10/vdq-consumer-assembly-1.0.jar at http://node-ip:54898/jars/vdq-consumer-assembly-1.0.jar with timestamp 1431627595602
15/05/14 14:19:55 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@node-ip:7077/user/Master...
15/05/14 14:19:55 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@node-ip:7077] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
15/05/14 14:20:15 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@node-ip:7077/user/Master...
15/05/14 14:20:35 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@node-ip:7077/user/Master...
15/05/14 14:20:55 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/14 14:20:55 ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
15/05/14 14:20:55 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.

Does anyone have any idea what just happened?

lostinplace
  • Did you set a master IP in your VDQConsumer using `setMaster()` that isn't a valid master IP? It seems your driver cannot talk to your master and then stopped. – yjshen May 15 '15 at 08:19
  • Does `setMaster` need to be used inside the app if the master was set via the spark-submit argument? Right now the app doesn't set the master at all; it relies on the host process to set it. – lostinplace May 15 '15 at 13:47
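
For reference, a minimal sketch of the driver setup the comments are discussing. The class name and master URL come from the question; the rest of the skeleton is an assumption about how the app might be structured, not the actual VDQConsumer code:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object VDQConsumer {
  def main(args: Array[String]): Unit = {
    // Leave the master unset here so that the --master flag passed to
    // spark-submit takes effect; calling setMaster("spark://...") in code
    // would override whatever the command line provided.
    val conf = new SparkConf().setAppName("vdq-consumer")
    val sc = new SparkContext(conf)
    // ... job logic would go here (hypothetical) ...
    sc.stop()
  }
}
```

In other words, setting the master in only one place (either `--master` on the command line or `setMaster()` in code, but not both) avoids a mismatch where the driver tries to reach a master address different from the one the cluster actually advertises.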

0 Answers