
I am running my Spark job through Livy, but I get the exception below:

java.util.concurrent.ExecutionException: java.io.IOException: Internal Server Error: "java.util.concurrent.ExecutionException: org.apache.livy.rsc.rpc.RpcException: java.util.NoSuchElementException: cd1299a0-9c19-4db2-b81b-deba9bf5a594
org.apache.livy.rsc.driver.RSCDriver.handle(RSCDriver.java:454)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.apache.livy.rsc.rpc.RpcDispatcher.handleCall(RpcDispatcher.java:130)
org.apache.livy.rsc.rpc.RpcDispatcher.channelRead0(RpcDispatcher.java:77)
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
java.lang.Thread.run(Thread.java:745)"
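
For context, the submission goes through Livy's programmatic Java client API; a minimal sketch of that kind of submission is below (the Livy endpoint, jar path, and job class are simplified placeholders, not my actual code):

import java.io.File;
import java.net.URI;

import org.apache.livy.Job;
import org.apache.livy.JobContext;
import org.apache.livy.LivyClient;
import org.apache.livy.LivyClientBuilder;

public class LivySubmitExample {

    // A trivial job for illustration; the real job runs Spark 1.6.3 logic on the cluster.
    public static class CountJob implements Job<Long> {
        @Override
        public Long call(JobContext jc) throws Exception {
            return jc.sc().parallelize(java.util.Arrays.asList(1, 2, 3)).count();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder Livy endpoint.
        LivyClient client = new LivyClientBuilder()
                .setURI(new URI("http://livy-host:8998"))
                .build();
        try {
            // Ship the jar containing the Job implementation to the remote Spark context.
            client.uploadJar(new File("/path/to/my-spark-job.jar")).get();
            // Submitting and waiting on the job is where the ExecutionException above surfaces.
            Long result = client.submit(new CountJob()).get();
            System.out.println("Result: " + result);
        } finally {
            client.stop(true);
        }
    }
}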

Looking at the Spark job logs, there is no error or exception; I only see entries like the following:

17/11/14 14:45:49 WARN NettyRpcEnv: RpcEnv already stopped.
17/11/14 14:45:49 INFO YarnAllocator: Completed container container_e60_1510219626098_0394_01_000013 on host: AUPER01-02-10-12-0.prod.vroc.com.au (state: COMPLETE, exit status: 0)
17/11/14 14:45:49 INFO YarnAllocator: Executor for container container_e60_1510219626098_0394_01_000013 exited because of a YARN event (e.g., pre-emption) and not because of an error in the running job.
17/11/14 14:45:49 WARN NettyRpcEnv: RpcEnv already stopped.
17/11/14 14:45:49 INFO YarnAllocator: Completed container container_e60_1510219626098_0394_01_000011 on host: AUPER01-01-10-13-0.prod.vroc.com.au (state: COMPLETE, exit status: 0)
17/11/14 14:45:49 INFO YarnAllocator: Executor for container container_e60_1510219626098_0394_01_000011 exited because of a YARN event (e.g., pre-emption) and not because of an error in the running job.
17/11/14 14:45:49 WARN NettyRpcEnv: RpcEnv already stopped.
17/11/14 14:45:49 INFO YarnAllocator: Completed container container_e60_1510219626098_0394_01_000005 on host: AUPER01-01-20-08-0.prod.vroc.com.au (state: COMPLETE, exit status: 0)
17/11/14 14:45:49 INFO YarnAllocator: Executor for container container_e60_1510219626098_0394_01_000005 exited because of a YARN event (e.g., pre-emption) and not because of an error in the running job.
17/11/14 14:45:49 WARN NettyRpcEnv: RpcEnv already stopped.
17/11/14 14:45:49 INFO YarnAllocator: Completed container container_e60_1510219626098_0394_01_000008 on host: AUPER01-02-30-12-1.prod.vroc.com.au (state: COMPLETE, exit status: 0)
17/11/14 14:45:49 INFO YarnAllocator: Executor for container container_e60_1510219626098_0394_01_000008 exited because of a YARN event (e.g., pre-emption) and not because of an error in the running job.

I am running Spark 1.6.3 jobs on HDP 2.6.3.

Luckylukee
  • Anything new on that? I've got exactly the same problem, after upgrading my cluster to HDP 2.6.3 and I'm also trying to connect to Spark 1.6.3! – D. Müller Dec 15 '17 at 10:09
  • This compatibility matrix shows that HDP 2.6.3 and Spark 1.6.3 are compatible with Livy 0.3.0, but Livy 0.4.0 was installed by the HDP upgrade: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_spark-component-guide/content/ch_introduction-spark.html Can this be a reason for the issue? – D. Müller Dec 18 '17 at 10:33

1 Answer


It may be caused by a version incompatibility in the Livy client libraries. Download the Livy API library version that matches your HDP version. Reference: https://community.hortonworks.com/questions/147936/livy-job-failing-after-upgrading-cluster-to-hdp-26.html

Library download link: http://repo.hortonworks.com/content/repositories/releases/org/apache/livy
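
For example, if you build your client with Maven, you would pin the Livy client artifacts to the HDP build published in that repository, roughly as in the sketch below (the HDP build qualifier shown here, 0.4.0.2.6.3.0-235, is an assumed example and should be confirmed against the repository listing):

<!-- Hortonworks releases repository (from the link above) -->
<repositories>
  <repository>
    <id>hortonworks-releases</id>
    <url>http://repo.hortonworks.com/content/repositories/releases/</url>
  </repository>
</repositories>

<dependencies>
  <!-- Livy programmatic API; the version must match the Livy shipped with your HDP release.
       The build qualifier below is an assumed example; check the repository for the real one. -->
  <dependency>
    <groupId>org.apache.livy</groupId>
    <artifactId>livy-api</artifactId>
    <version>0.4.0.2.6.3.0-235</version>
  </dependency>
  <dependency>
    <groupId>org.apache.livy</groupId>
    <artifactId>livy-client-http</artifactId>
    <version>0.4.0.2.6.3.0-235</version>
  </dependency>
</dependencies>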

Junfeng