I am trying to run Spark on Hadoop (YARN).
When I try to run spark-shell, it throws a ConnectionRefused exception. The log looks like this:
ERROR cluster.YarnClientSchedulerBackend: The YARN application has already ended! It might have been
killed or the Application Master may have failed to start. Check the YARN application logs for more details.
19/10/22 15:13:36 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Application application_1571690458300_0002 failed 2 times due to
Error launching appattempt_1571690458300_0002_000002. Got exception: java.net.ConnectException:
Call From master/x.x.x.x (a correct IP) to ubuntu:43856 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop
/ConnectionRefused
But when I run the WordCount example with YARN, everything works fine (so YARN itself is OK, I think).
As the log says, it tries to connect to ubuntu:43856. I believe it is trying to reach one of my slaves,
which should be slave1:43856 (as I set in the workers file). I think the problem is there, but
running YARN alone (without Spark) works fine.
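In case it matters, this is roughly how I expected name resolution to be set up on every node. The IPs and the names slave1..slave4 below are placeholders for my real ones, not my actual file:

```
# /etc/hosts (sketch -- IPs and names are placeholders)
192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2
192.168.1.13  slave3
192.168.1.14  slave4
```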
The output of the yarn node -list command is:
Node-Id       Node-State  Node-Http-Address  Number-of-Running-Containers
ubuntu:43856  RUNNING     ubuntu:8042        0
ubuntu:37951  RUNNING     ubuntu:8042        0
ubuntu:34335  RUNNING     ubuntu:8042        0
ubuntu:46500  RUNNING     ubuntu:8042        0
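What strikes me is that all four NodeManagers seem to register under the same hostname instead of slave1..slave4. A quick sketch of my own (just parsing the node list above, values copied verbatim from the output) confirms there is only one distinct hostname:

```python
# Sanity check: parse the `yarn node -list` rows above and collect
# the distinct hostnames that the NodeManagers registered with.
rows = """\
ubuntu:43856 RUNNING ubuntu:8042 0
ubuntu:37951 RUNNING ubuntu:8042 0
ubuntu:34335 RUNNING ubuntu:8042 0
ubuntu:46500 RUNNING ubuntu:8042 0
"""

hostnames = set()
for line in rows.strip().splitlines():
    node_id = line.split()[0]            # e.g. "ubuntu:43856"
    host, _, port = node_id.partition(":")
    hostnames.add(host)

print(hostnames)  # only "ubuntu" -- every node reports the same name
```

So when Spark asks YARN for a container, it presumably gets "ubuntu:43856" back and then cannot resolve/reach "ubuntu" from the master, which would explain the Connection refused.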
There are a lot of config files; let me know if one (or more) of them is needed.
Thanks in advance.