
I need to inject events that were saved to HDFS during online Kafka streaming back into a PySpark DStream, so that they go through the same algorithmic processing. I found a code example by Holden Karau that is "equivalent to a checkpointable, replayable, reliable message queue like Kafka". I wonder whether it is possible to implement it in PySpark:

package com.holdenkarau.spark.testing
import org.apache.spark.streaming._
import org.apache.spark._
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._

import scala.language.implicitConversions
import scala.reflect.ClassTag
import org.apache.spark.streaming.dstream.FriendlyInputDStream

/**
* This is an input stream just for the testsuites. This is equivalent to a
* checkpointable, replayable, reliable message queue like Kafka.
* It requires a sequence as input, and returns the i_th element at the i_th batch
* under manual clock.
*
* Based on TestInputStream class from TestSuiteBase in the Apache Spark project.
*/

class TestInputStream[T: ClassTag](@transient var sc: SparkContext,
  ssc_ : StreamingContext, input: Seq[Seq[T]], numPartitions: Int)
  extends FriendlyInputDStream[T](ssc_) {

  def start() {}

  def stop() {}

  def compute(validTime: Time): Option[RDD[T]] = {
    logInfo("Computing RDD for time " + validTime)
    val index = ((validTime - ourZeroTime) / slideDuration - 1).toInt
    val selectedInput = if (index < input.size) input(index) else Seq[T]()

    // lets us test cases where RDDs are not created
    Option(selectedInput).map{si =>
      val rdd = sc.makeRDD(si, numPartitions)
      logInfo("Created RDD " + rdd.id + " with " + selectedInput)
      rdd
    }
  }
}

1 Answer


Spark provides two built-in DStream implementations that can be used for testing, and in the majority of cases you don't need an external one.

One of them, in a simplified form, is available in PySpark as pyspark.streaming.StreamingContext.queueStream:

from pyspark.streaming import StreamingContext

ssc = StreamingContext(sc, batchDuration=1)
stream = ssc.queueStream([
    sc.range(0, 1000),
    sc.range(1000, 2000),
    sc.range(2000, 3000)
])
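
To check that the batch and streaming paths produce the same results, you can run the same function over the input RDDs and over the queued stream. A rough sketch continuing the snippet above (process is a placeholder for whatever your pipeline actually does):

def process(rdd):
    # stand-in for the real processing logic
    return rdd.map(lambda x: x * 2)

# batch results, one list per micro-batch
batch_results = [process(sc.range(i * 1000, (i + 1) * 1000)).collect()
                 for i in range(3)]

# the same logic applied to the replayed stream
stream.transform(process).pprint()

ssc.start()
ssc.awaitTerminationOrTimeout(10)
ssc.stop(stopSparkContext=False)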

If that is not enough, you can always use a separate thread to atomically write serialized data to a file system and read it back from there with a standard file-based DStream.
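
For example, with textFileStream a background thread can write each replayed batch into the monitored directory, using a dot-prefixed temporary name (which Spark's default path filter ignores) and renaming it into place so that only complete files become visible. A rough sketch, assuming a fresh StreamingContext; the directory, record format and batch contents below are made up for illustration:

import os
import threading
import time

from pyspark.streaming import StreamingContext

input_dir = "/tmp/replayed-events"  # monitored directory (illustrative path)
os.makedirs(input_dir, exist_ok=True)

ssc = StreamingContext(sc, batchDuration=1)
ssc.textFileStream(input_dir).pprint()

def feed(batches):
    # Write each batch to a hidden file first, then rename it so the
    # finished file appears in the directory atomically.
    for i, batch in enumerate(batches):
        tmp_path = os.path.join(input_dir, ".batch-%05d" % i)
        with open(tmp_path, "w") as f:
            for record in batch:
                f.write("%s\n" % record)
        os.rename(tmp_path, os.path.join(input_dir, "batch-%05d" % i))
        time.sleep(1)  # roughly one file per micro-batch

ssc.start()
threading.Thread(
    target=feed,
    args=([range(0, 1000), range(1000, 2000), range(2000, 3000)],),
    daemon=True,
).start()
ssc.awaitTerminationOrTimeout(15)
ssc.stop(stopSparkContext=False)

The same pattern works against HDFS: write to a temporary path and rename the file into the monitored directory once it is complete.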

  • queueStream is not good because it does not support checkpoints, so I cannot simulate a stateful stream with it. The challenge I have is to retrieve events that were in the stream and feed them back as if they were a stream. And on top of that, the stream and batch processing results should be the same. – Alex Apr 15 '18 at 13:07
  • Can you elaborate on what a file-based DStream is and how to use it? – Alex Apr 15 '18 at 14:58
  • I mean `textFileStream` or an equivalent. You just put files on the local file system and wait for Spark to read them. Requires a bit of code, but not that much. – Alper t. Turker Apr 16 '18 at 14:14