
I'm learning Spark and trying to build a simple streaming service.

For example, I have a Kafka queue and a Spark job like word count. That example uses a stateless mode. I'd like to accumulate word counts, so if "test" has been sent a few times in different messages I could get the total number of all its occurrences.

Using other examples like StatefulNetworkWordCount, I've tried to modify my Kafka streaming service:

val sc = new SparkContext(sparkConf)
val ssc = new StreamingContext(sc, Seconds(2))

ssc.checkpoint("/tmp/data")

// Create direct kafka stream with brokers and topics
val topicsSet = topics.split(",").toSet
val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicsSet)

// Get the lines, split them into words, count the words and print
val lines = messages.map(_._2)
val words = lines.flatMap(_.split(" "))

val wordDstream = words.map(x => (x, 1))

// Update the cumulative count using mapWithState
// This will give a DStream made of state (which is the cumulative count of the words)
val mappingFunc = (word: String, one: Option[Int], state: State[Int]) => {
  val sum = one.getOrElse(0) + state.getOption.getOrElse(0)
  val output = (word, sum)
  state.update(sum)
  output
}

val stateDstream = wordDstream.mapWithState(
  StateSpec.function(mappingFunc) /*.initialState(initialRDD)*/)

stateDstream.print()

stateDstream.map(s => (s._1, s._2.toString)).foreachRDD(rdd => sc.toRedisZSET(rdd, "word_count", 0))

// Start the computation
ssc.start()
ssc.awaitTermination()

I get a lot of errors like

17/03/26 21:33:57 ERROR streaming.StreamingContext: Error starting the context, marking it as stopped
java.io.NotSerializableException: DStream checkpointing has been enabled but the DStreams with their functions are not serializable
org.apache.spark.SparkContext
Serialization stack:
    - object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@2b680207)
    - field (class: com.DirectKafkaWordCount$$anonfun$main$2, name: sc$1, type: class org.apache.spark.SparkContext)
    - object (class com.DirectKafkaWordCount$$anonfun$main$2, <function1>)
    - field (class: org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3, name: cleanedF$1, type: interface scala.Function1)

The stateless version, though, works fine without errors:

val sc = new SparkContext(sparkConf)
val ssc = new StreamingContext(sc, Seconds(2))

// Create direct kafka stream with brokers and topics
val topicsSet = topics.split(",").toSet
val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topicsSet)

// Get the lines, split them into words, count the words and print
val lines = messages.map(_._2)
val words = lines.flatMap(_.split(" "))
val wordCounts = words.map(x => (x, 1L)).reduceByKey(_ + _).map(s => (s._1, s._2.toString))
wordCounts.print()

wordCounts.foreachRDD(rdd => sc.toRedisZSET(rdd, "word_count", 0))

// Start the computation
ssc.start()
ssc.awaitTermination()

The question is: how do I make the streaming word count stateful?

kikulikov
  • do you actually need checkpointing? you could fix it by removing `ssc.checkpoint("/tmp/data")` line, see [explanation](https://forums.databricks.com/questions/382/why-is-my-spark-streaming-application-throwing-a-n.html) – dk14 Mar 27 '17 at 02:55
  • Since [Spark 2.2.0](http://spark.apache.org/news/spark-2-2-0-released.html) was released you should seriously consider Structured Streaming to build a [stateful stream processing](http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#arbitrary-stateful-operations) using Spark (as described also in [Faster Stateful Stream Processing in Apache Spark Streaming](https://databricks.com/blog/2016/02/01/faster-stateful-stream-processing-in-apache-spark-streaming.html)). – Jacek Laskowski Jul 16 '17 at 10:40
  • @JacekLaskowski Thank you! I'll check it. – kikulikov Jul 30 '17 at 21:22

1 Answer


At this line:

ssc.checkpoint("/tmp/data")

you've enabled checkpointing, which means everything in your:

wordCounts.foreachRDD(rdd => sc.toRedisZSET(rdd, "word_count", 0))

has to be serializable, and sc itself is not, as you can see from the error message:

object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@2b680207)

Removing the checkpointing line will make that error go away. Note, though, that mapWithState requires checkpointing to be enabled, so for the stateful version you need to make the foreachRDD closure serializable instead.
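One way to do that (a minimal sketch, assuming the same spark-redis implicits your code already imports for `sc.toRedisZSET`) is to take the SparkContext from the RDD inside the closure rather than capturing the outer `sc`:

stateDstream.map(s => (s._1, s._2.toString)).foreachRDD { rdd =>
  // rdd.sparkContext is looked up inside the closure, so the driver-side
  // (non-serializable) `sc` is no longer captured and checkpointing can
  // serialize the foreachRDD function
  rdd.sparkContext.toRedisZSET(rdd, "word_count", 0)
}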

Another way is to write the data to Redis directly from each partition of the DStream's RDDs, without touching the SparkContext inside the closure, something like:

wordCounts.foreachRDD { rdd =>
  // each partition is written straight to Redis; no SparkContext is captured here
  rdd.foreachPartition(partition => RedisContext.setZset("word_count", partition, ttl, redisConfig))
}

RedisContext is a serializable object that doesn't depend on SparkContext.
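If you'd rather not rely on spark-redis internals, the same per-partition pattern also works with the plain Jedis client (a sketch only; `redisHost` and `redisPort` are placeholders for your connection settings, and it assumes the jedis artifact is on the classpath):

import redis.clients.jedis.Jedis

stateDstream.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // one connection per partition, opened on the executor, so nothing
    // non-serializable is captured from the driver
    val jedis = new Jedis(redisHost, redisPort) // placeholder host/port
    partition.foreach { case (word, count) =>
      // overwrite the member's score with the latest cumulative count
      jedis.zadd("word_count", count.toDouble, word)
    }
    jedis.close()
  }
}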

See also: https://github.com/RedisLabs/spark-redis/blob/master/src/main/scala/com/redislabs/provider/redis/redisFunctions.scala

dk14