
I downloaded spark-2.1.0-bin-hadoop2.7.tgz from http://spark.apache.org/downloads.html. I started Hadoop HDFS and YARN with `start-dfs.sh` and `start-yarn.sh`, but running `spark-shell --master yarn --deploy-mode client` gives me the error below:

    $ spark-shell --master yarn --deploy-mode client
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/04/08 23:04:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/08 23:04:54 WARN util.Utils: Your hostname, Pandora resolves to a loopback address: 127.0.1.1; using 192.168.1.11 instead (on interface wlp3s0)
17/04/08 23:04:54 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/04/08 23:04:56 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/04/08 23:05:15 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
17/04/08 23:05:15 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:614)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:169)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:567)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
    at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
    at $line3.$read$$iw$$iw.<init>(<console>:15)
    at $line3.$read$$iw.<init>(<console>:42)
    at $line3.$read.<init>(<console>:44)
    at $line3.$read$.<init>(<console>:48)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.$print$lzycompute(<console>:7)
    at $line3.$eval$.$print(<console>:6)
    at $line3.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
    at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
    at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
    at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
    at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:38)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:37)
    at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:37)
    at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:105)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:920)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
    at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
    at org.apache.spark.repl.Main$.doMain(Main.scala:68)
    at org.apache.spark.repl.Main$.main(Main.scala:51)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/04/08 23:05:15 ERROR client.TransportClient: Failed to send RPC 7918328175210939600 to /192.168.1.11:56186: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
17/04/08 23:05:15 ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map()) to AM was unsuccessful
java.io.IOException: Failed to send RPC 7918328175210939600 to /192.168.1.11:56186: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:249)
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:233)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
    at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
    at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:438)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:455)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
17/04/08 23:05:15 ERROR util.Utils: Uncaught exception in thread Yarn application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:512)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:93)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:151)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:467)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1588)
    at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1826)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1825)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:108)
Caused by: java.io.IOException: Failed to send RPC 7918328175210939600 to /192.168.1.11:56186: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:249)
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:233)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
    at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
    at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:438)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:455)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
java.lang.IllegalStateException: Spark context stopped while waiting for backend
  at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:614)
  at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:169)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:567)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
  ... 47 elided
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

YARN detects that Spark is running on it, but the error causes the Spark application to exit with an undefined status.


Dobob
  • Spark does not require HDFS or YARN unless you specifically configured it to do so – OneCricketeer Apr 07 '17 at 02:38
  • @cricket_007 I am pretty sure I haven't configured anything specifically to have it use YARN. The tutorial I followed set up the configuration files and then requires specifying the flags to run Spark on YARN. Here is the tutorial I followed; I tried different configurations and it still doesn't work: http://why-not-learn-something.blogspot.com/2015/06/spark-installation-pseudo.html – Dobob Apr 07 '17 at 05:38
  • Spark 1.3 is old... Why do you need YARN or HDFS (or Hadoop at all)? – OneCricketeer Apr 07 '17 at 06:34
  • @cricket_007 I downloaded a fresh Spark tar file and extracted it, and I can't run `spark-shell` without errors unless I first run `start-dfs.sh`. Which means that the newer Spark, at least, requires HDFS. I don't see any other reason. – Dobob Apr 09 '17 at 02:49
  • HDFS is only required for reading from HDFS. YARN is only required when running Spark on YARN. Without either, you can run Spark locally using the built-in Standalone scheduler and read from the local filesystem. You should try `spark-2.1.0-bin-without-hadoop.tgz` – OneCricketeer Apr 09 '17 at 02:53
  • @cricket_007 I actually downloaded the version: spark-2.1.0-bin-hadoop2.7.tgz. So that probably explains why I need to start Hadoop HDFS first. That is the default file they show you when you go to the Apache Spark download page. – Dobob Apr 09 '17 at 03:01
  • Yes, I know, but you can download the one without Hadoop, then if you actually **need** Hadoop, you need to download from http://hadoop.apache.org separately – OneCricketeer Apr 09 '17 at 03:03
  • @cricket_007 If I downloaded the version of Spark that requires Hadoop, then does that mean I should replace `spark-shell --master yarn --deploy-mode client` with plain `spark-shell`, because it already expects HDFS and YARN to be running? I need to use Spark with Hadoop for my project. – Dobob Apr 09 '17 at 03:36
  • You would need to inspect `spark-env.sh` to know if the master is always set to YARN (which I doubt), or if the deploy mode is the client mode (which it isn't)... You need the CLI arguments, but it is still unclear to me why you need pseudo-distributed Hadoop. If you want the Spark shell to work, download the one without Hadoop! If you need Hadoop, then read the documentation on how to configure Spark for it (you'll learn the proper steps that way) – OneCricketeer Apr 09 '17 at 03:46
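
Picking up on the comments above about running without HDFS or YARN: a quick sanity check, assuming the spark-2.1.0-bin-hadoop2.7 `bin` directory is on the PATH as in the question, is to start the shell against the built-in local scheduler first and only then against YARN:

    # Local mode: no HDFS or YARN needed; reads from the local filesystem
    spark-shell --master local[*]

    # YARN client mode: requires HDFS and YARN to be up (start-dfs.sh, start-yarn.sh)
    spark-shell --master yarn --deploy-mode client

If the first command works and the second fails as above, the problem is on the YARN side rather than in Spark itself, which is what the answers below confirm.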

2 Answers


I found the solution in another Stack Overflow question. It was not about configuring Apache Spark, but about configuring Hadoop YARN:

Running yarn with spark not working with Java 8

Make sure the yarn-site.xml in your Hadoop configuration folder has these properties, which disable YARN's physical and virtual memory checks (it is the virtual memory check that kills the Spark ApplicationMaster container, as the NodeManager log quoted in the other answer shows):

<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
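
After editing yarn-site.xml, the new properties typically only take effect once YARN is restarted. A minimal sketch, assuming the stock Hadoop sbin scripts are on the PATH as in the question:

    # Restart YARN so the NodeManagers pick up the edited yarn-site.xml
    stop-yarn.sh
    start-yarn.sh

    # Retry the shell in YARN client mode
    spark-shell --master yarn --deploy-mode client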
Dobob

I met the same problem. When I checked the NodeManager log, I found this warning:

2017-10-26 19:43:21,787 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=3820,containerID=container_1509016963775_0001_02_000001] is running beyond virtual memory limits. Current usage: 339.0 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.

So I set a bigger virtual memory ratio (yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml, whose default value is 2.1). Then it worked.
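
For reference, a sketch of what that override could look like in yarn-site.xml; the value 4 below is only an illustrative ratio, not a value given in this answer:

    <property>
        <!-- virtual-to-physical memory ratio for containers; the default is 2.1 -->
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
    </property>

Raising the ratio keeps the memory check enabled but gives containers more virtual-memory headroom, whereas the other answer disables the checks entirely.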

xin