
I installed Zeppelin in both local mode and cluster mode. Both installed and connected successfully. But cluster mode cannot process my code, not even the Zeppelin examples. A paragraph stays pending and running for a long time, then fails every time with this error:

java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]

Then I opened the log directory and looked at my zeppelin-interpreter-spark-pipeline-lls6.log. I paste the ERROR log info below:

ERROR [2015-07-09 17:30:20,721] ({pool-1-thread-2} ProcessFunction.java[process]:41) - Internal error processing getProgress
org.apache.zeppelin.interpreter.InterpreterException: java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.open(ClassloaderInterpreter.java:76)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:68)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getProgress(LazyOpenInterpreter.java:109)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.getProgress(RemoteInterpreterServer.java:297)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$getProgress.getResult(RemoteInterpreterService.java:938)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$getProgress.getResult(RemoteInterpreterService.java:923)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

I moved the example bank-full.txt to the HDFS directory. The same problem does not appear in local mode.

Our cluster runs in standalone mode, with Spark 1.3 and Hadoop 2.0.0-CDH-4.5.0. I added the master URL under conf. Has anyone encountered this situation, and can you tell me how to fix it?

Thanks all!

alex44jzy

1 Answer


This looks like the issue I had with an EMR cluster using a fixed IP. In cluster mode, the Hadoop/Spark cluster usually runs on machines other than the Zeppelin server, so the master URL must point at the cluster:

export MASTER="spark://master_addr:7077"

Also double-check that the interpreter is bound to the notebook in the Zeppelin server, and that the following are set:

export SPARK_HOME=XXX
export SPARK_CONF_DIR=XXX
export HADOOP_HOME=XXX
export HADOOP_CONF_DIR=XXX
export SPARK_YARN_JAR=XXX
export SPARK_CLASSPATH=XXX
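Putting the pieces above together, a minimal conf/zeppelin-env.sh for a standalone Spark cluster might look like the sketch below. The host name and paths are placeholders, not values from the question; adjust them to your own layout (SPARK_YARN_JAR and SPARK_CLASSPATH are only needed in specific setups and are omitted here):

```shell
# conf/zeppelin-env.sh -- example values only; adapt to your cluster
export MASTER="spark://master_addr:7077"          # standalone Spark master URL
export SPARK_HOME="/opt/spark-1.3.0"              # Spark install the interpreter should use
export SPARK_CONF_DIR="$SPARK_HOME/conf"
export HADOOP_HOME="/opt/hadoop-2.0.0-cdh4.5.0"
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"  # lets Spark resolve hdfs:// paths
```

After editing the file, restart the Zeppelin daemon so the Spark interpreter picks up the new environment.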
Kangrok Lee