
I want to set up a single-node cluster in Apache Spark. I installed Java and Scala, downloaded the Spark build for Apache Hadoop 2.6, and unpacked it. When I try to start spark-shell it throws an error, and I don't have access to `sc` in the shell. I also compiled from source, but I get the same error. What am I doing wrong?

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
java.net.BindException: Failed to bind to: ADMINISTRATOR.home/192.168.1.5:0: Service 'sparkDriver' failed after 16 retries!
 at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
 at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
 at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
 at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
 at scala.util.Try$.apply(Try.scala:161)
 at scala.util.Success.map(Try.scala:206)
 at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
 at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
 at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
 at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
 at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
 at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
 at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

java.lang.NullPointerException
 at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
 at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
 at java.lang.reflect.Constructor.newInstance(Unknown Source)
 at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
 at $iwC$$iwC.<init>(<console>:9)
 at $iwC.<init>(<console>:18)
 at <init>(<console>:20)
 at .<init>(<console>:24)
 at .<clinit>(<console>)
 at .<init>(<console>:7)
 at .<clinit>(<console>)
 at $print(<console>)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
 at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
 at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
 at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
 at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
 at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
 at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
 at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
 at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
 at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
 at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
 at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
 at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
 at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
 at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
 at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
 at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
 at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
 at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
 at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
 at org.apache.spark.repl.Main$.main(Main.scala:31)
 at org.apache.spark.repl.Main.main(Main.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

<console>:10: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:10: error: not found: value sqlContext
       import sqlContext.sql
              ^

scala> 
Mateusz
  • Can you please add the text form of the exception? What are the arguments of the `spark-shell.bat` command? Did you try `spark-shell.bat --master local[*]`? – Dawid Pura May 06 '15 at 19:26
  • I tried with the `--master` attribute but it doesn't work. – Mateusz May 06 '15 at 19:27
  • You should edit the question with additional info about the problem and the exception. – Dawid Pura May 06 '15 at 19:28
  • If it's not working, please add additional info on why it isn't working. Try to run **exactly** `spark-shell.bat --master local[*]` and paste the output into your question. – Dawid Pura May 06 '15 at 19:32
  • How to copy text from `cmd`: https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/windows_dos_copy.mspx?mfr=true I can't see what command you are running. – Dawid Pura May 06 '15 at 19:45
  • I'm running it with `spark-shell.cmd --master local[*]`. – Mateusz May 06 '15 at 19:47
  • Run `cmd` with administrator privileges. – Dawid Pura May 06 '15 at 20:36
  • Running with administrator rights also did not help, with the command `spark-shell.cmd --master local[*]`. – Mateusz May 06 '15 at 20:46

3 Answers


I've just begun to learn Spark, and I want to run it in local mode. I ran into a problem like yours:

java.net.BindException: Failed to bind to: /124.232.132.94:0: Service 'sparkDriver' failed after 16 retries!

Because I only wanted to run Spark in local mode, I found a solution to this problem: edit the file spark-env.sh (you can find it in $SPARK_HOME/conf/) and add the following to it:

export SPARK_MASTER_IP=127.0.0.1
export SPARK_LOCAL_IP=127.0.0.1

After that, Spark works fine for me in local mode. I hope this helps! :)
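
The question above is on Windows, though, where spark-env.sh is not read. A minimal sketch of the equivalent, assuming your Spark distribution's Windows launch scripts load conf\spark-env.cmd if it exists (check for bin\load-spark-env.cmd in your unpacked Spark):

REM conf\spark-env.cmd -- bind the driver to loopback so the 'sparkDriver' service can bind a port
set SPARK_LOCAL_IP=127.0.0.1
set SPARK_MASTER_IP=127.0.0.1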

mike
  • This fixed it for me. It was the weirdest problem, because it worked fine one minute, then threw that error the next without my having modified any part of my code. Maybe that address was used by another process? – Christophe Sep 29 '15 at 15:37
  • Thanks, that sorted the same issue I was having with Spark on Ubuntu. – martino Apr 12 '16 at 21:53

The above solution did not work for me. I followed these steps instead: How to start Spark applications on Windows (aka Why Spark fails with NullPointerException)?

and changed the HADOOP_HOME environment variable in the system variables. It worked for me.
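
For reference, the linked steps essentially amount to installing winutils.exe and pointing HADOOP_HOME at its parent directory. A sketch, where C:\hadoop is only an example location:

REM assumes winutils.exe has been placed in C:\hadoop\bin (example path)
set HADOOP_HOME=C:\hadoop
set PATH=%HADOOP_HOME%\bin;%PATH%
spark-shell.cmd --master local[*]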

kjosh

It might be an ownership issue as well:

hadoop fs -chown -R deepdive:root /user/deepdive/
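
If your setup involves HDFS, you can first check who currently owns the directory before changing anything; the user and path are just the ones from the example above:

hadoop fs -ls /user/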

deepdive