
I am using Spark Streaming 1.6.1 with Kafka 0.9.0.1 (the createStream API) on HDP 2.4.2. My use case sends large messages, ranging from 5 MB to 30 MB, to Kafka topics; with those sizes Spark Streaming fails to complete its job and crashes with the exception below. I am doing a DataFrame operation and saving the result to HDFS in CSV format; here is my code snippet.

Reading from the Kafka topic:
 val lines = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicMap, StorageLevel.MEMORY_AND_DISK_SER_2/*MEMORY_ONLY_SER_2*/).map(_._2)
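For reference, the kafkaParams I pass look roughly like the sketch below (hosts and values are placeholders, not my exact settings); with messages this large the fetch size has to cover the biggest expected message:

    // Illustrative kafkaParams for the receiver-based createStream API.
    // fetch.message.max.bytes must be at least as large as the biggest
    // message on the topic (here ~30 MB plus headroom), otherwise the
    // old high-level consumer cannot fetch it.
    val kafkaParams = Map[String, String](
      "zookeeper.connect"               -> "zkhost:2181",            // placeholder host
      "group.id"                        -> "spark-streaming-group",  // placeholder group
      "fetch.message.max.bytes"         -> (64 * 1024 * 1024).toString,
      "zookeeper.connection.timeout.ms" -> "10000"
    )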

 Writing to HDFS:
 val hdfsDF: DataFrame = getDF(sqlContext, eventDF, schema,topicName)
      hdfsDF.show
      hdfsDF.write
        .format("com.databricks.spark.csv")
        .option("header", "false")
        .save(hdfsPath + "/" + "out_" + System.currentTimeMillis().toString())

16/11/11 12:12:35 WARN ReceiverTracker: Error reported by receiver for stream 0: Error handling message; exiting - java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
    at java.lang.StringCoding.decode(StringCoding.java:193)
    at java.lang.String.<init>(String.java:426)
    at java.lang.String.<init>(String.java:491)
    at kafka.serializer.StringDecoder.fromBytes(Decoder.scala:50)
    at kafka.serializer.StringDecoder.fromBytes(Decoder.scala:42)
    at kafka.message.MessageAndMetadata.message(MessageAndMetadata.scala:32)
    at org.apache.spark.streaming.kafka.KafkaReceiver$MessageHandler.run(KafkaInputDStream.scala:137)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Followed by:

 java.lang.Exception: Could not compute split, block input-0-1478610837000 not found
at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
  • Most of the time it is throwing the above exception. – nilesh1212 Nov 11 '16 at 11:34
  • Did you find a solution to this issue? – AnswerSeeker Mar 31 '17 at 14:11
  • I was able to solve this in my case by reducing the number of partitions in Kafka; I had to remove the topic and create it again. I had 3 Kafka brokers and 4 partitions for my topic, with files of a similar size to yours coming in continuously. With 4 partitions there were 4 threads pushing the files, which Spark Streaming could not handle; changing the partition count to 2 did the trick. I am assuming you have also set the Kafka max message size property to match your max file size. If you are still pursuing this issue, I hope this helps. – AnswerSeeker Mar 31 '17 at 17:16
  • @Rag, yes, it got fixed by setting the max message size on the Kafka side and maxRatePerPartition on the Spark Streaming side. – nilesh1212 Apr 03 '17 at 13:00
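For anyone hitting the same thing, the fix described in the comments maps roughly to the settings below. This is a sketch, not the exact configuration I used; the property names are the standard Kafka/Spark ones, and the sizes and rates are illustrative:

    // Kafka broker side (server.properties) -- allow ~30 MB messages:
    //   message.max.bytes=33554432
    //   replica.fetch.max.bytes=33554432
    // Producer side:
    //   max.request.size=33554432

    // Spark Streaming side -- cap the ingest rate so the receiver does not
    // buffer more data at once than its heap can hold:
    val conf = new org.apache.spark.SparkConf()
      .set("spark.streaming.backpressure.enabled", "true")   // available since Spark 1.5
      .set("spark.streaming.receiver.maxRate", "10")         // records/sec, receiver-based API
      // For the direct (createDirectStream) API the equivalent knob would be:
      // .set("spark.streaming.kafka.maxRatePerPartition", "10")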
