
I am trying to implement a Spark Streaming application in Scala. I want to use the fileStream() method to process newly arriving files as well as older files already present in the HDFS directory.

I have followed the fileStream() implementations discussed in two earlier Stack Overflow threads.

I am calling fileStream() as follows:

val linesRDD = ssc.fileStream[LongWritable, Text, TextInputFormat](inputDirectory, (t: org.apache.hadoop.fs.Path) => true, false).map(_._2.toString)

But I am getting the following error message:

type arguments [org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text, org.apache.hadoop.mapred.TextInputFormat] conform to the bounds of none of the overloaded alternatives of value fileStream:

[K, V, F <: org.apache.hadoop.mapreduce.InputFormat[K,V]](directory: String, filter: org.apache.hadoop.fs.Path ⇒ Boolean, newFilesOnly: Boolean, conf: org.apache.hadoop.conf.Configuration)(implicit evidence$12: scala.reflect.ClassTag[K], implicit evidence$13: scala.reflect.ClassTag[V], implicit evidence$14: scala.reflect.ClassTag[F]): org.apache.spark.streaming.dstream.InputDStream[(K, V)] <and>

[K, V, F <: org.apache.hadoop.mapreduce.InputFormat[K,V]](directory: String, filter: org.apache.hadoop.fs.Path ⇒ Boolean, newFilesOnly: Boolean)(implicit evidence$9: scala.reflect.ClassTag[K], implicit evidence$10: scala.reflect.ClassTag[V], implicit evidence$11: scala.reflect.ClassTag[F]): org.apache.spark.streaming.dstream.InputDStream[(K, V)] <and>

[K, V, F <: org.apache.hadoop.mapreduce.InputFormat[K,V]](directory: String)(implicit evidence$6: scala.reflect.ClassTag[K], implicit evidence$7: scala.reflect.ClassTag[V], implicit evidence$8: scala.reflect.ClassTag[F]): org.apache.spark.streaming.dstream.InputDStream[(K, V)]

wrong number of type parameters for overloaded method value fileStream with alternatives:

(the same three alternatives as above)

I am using Spark 1.4.1 and Hadoop 2.7.1. Before posting this question I looked at different implementations discussed on Stack Overflow and at the Spark docs, but nothing helped me. Any help would be appreciated.

Thanks, Rajneesh.


1 Answer


Please find below sample Java code with the correct imports; it is working fine for me. The key point is to import TextInputFormat from the new MapReduce API package, org.apache.hadoop.mapreduce.lib.input, rather than the old org.apache.hadoop.mapred package, since fileStream requires F <: org.apache.hadoop.mapreduce.InputFormat[K,V].

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat; // new-API TextInputFormat
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

JavaStreamingContext jssc = SparkUtils.getStreamingContext("key", jsc); // author's helper for creating the context
// JavaDStream<String> rawInput = jssc.textFileStream(inputPath);

JavaPairInputDStream<LongWritable, Text> inputStream = jssc.fileStream(
        inputPath, LongWritable.class, Text.class, TextInputFormat.class,
        new Function<Path, Boolean>() {
            @Override
            public Boolean call(Path v1) throws Exception {
                // Skip HDFS staging files that are still being copied.
                if (v1.getName().contains("COPYING")) {
                    return Boolean.FALSE;
                }
                return Boolean.TRUE;
            }
        }, true); // newFilesOnly; pass false to also pick up files already in the directory

JavaDStream<String> rawInput = inputStream.map(
        new Function<Tuple2<LongWritable, Text>, String>() {
            @Override
            public String call(Tuple2<LongWritable, Text> v1) throws Exception {
                return v1._2().toString(); // keep only the line text, drop the byte offset
            }
        });
log.info(tracePrefix + "Created the stream, Window Interval: " + windowInterval + ", Slide interval: " + slideInterval);
rawInput.print();
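
For completeness, here is a minimal Scala sketch of the equivalent fix for the original question, assuming the ssc and inputDirectory names from the question; the decisive change is importing TextInputFormat from org.apache.hadoop.mapreduce.lib.input instead of org.apache.hadoop.mapred:

import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{LongWritable, Text}
// New-API input format; the old org.apache.hadoop.mapred.TextInputFormat
// does not satisfy the F <: org.apache.hadoop.mapreduce.InputFormat[K,V] bound.
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

val lines = ssc.fileStream[LongWritable, Text, TextInputFormat](
    inputDirectory,
    (path: Path) => !path.getName.contains("COPYING"), // skip staging files
    false // newFilesOnly = false: also process files already in the directory
  ).map(_._2.toString)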
Lokesh Kumar P
• Thanks, Lokesh. It is working for me as well. I was importing the wrong package for TextInputFormat (import org.apache.hadoop.mapred.TextInputFormat;); the correct one is import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;. One more thing: in your question http://stackoverflow.com/questions/29935732/spark-streaming-textfilestream-filestream-get-file-name you were trying to get the names of the files processed by Spark. Can you guide me on that? I want to get the file names Spark has processed so that I can delete/move them to a different directory. – Rajneesh Kumar Oct 17 '15 at 19:55
• Hi, unfortunately Spark RDDs do not provide an API to fetch the file names; the only advice I got from Stack Overflow is to print the RDD as a debug string and parse the file names out of that, but it is a pretty dirty hack. – Lokesh Kumar P Oct 19 '15 at 09:49
  • 1
    Not to dig up an old problem, but answer here seems to work , though not sure of performance issues with it. http://stackoverflow.com/a/40245068/213816 – OneCricketeer Jan 27 '17 at 00:34
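
For later readers, here is a minimal Scala sketch of the approach from the answer linked above for recovering the source file name of each record. It assumes pairStream is the raw DStream[(LongWritable, Text)] returned by fileStream (before mapping to String), and it relies on the internal detail that each batch RDD produced by fileStream is a union of one NewHadoopRDD per input file, so treat it as a workaround rather than a supported API:

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.FileSplit
import org.apache.spark.rdd.{NewHadoopRDD, UnionRDD}

val linesWithFileNames = pairStream.transform { rdd =>
  // Each dependency of the batch RDD is a NewHadoopRDD for one input file;
  // mapPartitionsWithInputSplit exposes the FileSplit, which carries the path.
  val perFile = rdd.dependencies.map { dep =>
    dep.rdd.asInstanceOf[NewHadoopRDD[LongWritable, Text]]
      .mapPartitionsWithInputSplit { (split, iter) =>
        val fileName = split.asInstanceOf[FileSplit].getPath.toString
        iter.map { case (_, line) => (fileName, line.toString) }
      }
  }
  new UnionRDD(rdd.context, perFile)
}
// linesWithFileNames is a DStream[(String, String)] of (fileName, line) pairs,
// from which the processed file names can be collected per batch.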