
I am trying to run the script from this blog:

import sys
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def SaveRecord(rdd):
    host = 'sparkmaster.example.com'
    table = 'cats'
    keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
    valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
    conf = {"hbase.zookeeper.quorum": host,
            "hbase.mapred.outputtable": table,
            "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
            "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
            "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}
    # Parse each JSON line once; the row key is the "id" field and the value
    # list is [row_key, column_family, qualifier, raw_json] for the Put converter.
    def to_put(x):
        row = str(json.loads(x)["id"])
        return (row, [row, "cfamily", "cats_json", x])
    datamap = rdd.map(to_put)
    datamap.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: StreamCatsToHBase.py <hostname> <port>")
        sys.exit(-1)

    sc = SparkContext(appName="StreamCatsToHBase")
    ssc = StreamingContext(sc, 1)  # 1-second batch interval
    lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))
    lines.foreachRDD(SaveRecord)

    ssc.start()            # Start the computation
    ssc.awaitTermination() # Wait for the computation to terminate
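As a side note, the per-line transformation inside `SaveRecord` can be checked without Spark or HBase at all; here is a self-contained sketch of just that mapping, with the column family and qualifier hard-coded as in the script (the sample JSON line is made up for illustration):

```python
import json

def to_put(line, column_family="cfamily", qualifier="cats_json"):
    # Mirror the rdd.map in SaveRecord: the row key is the JSON "id" field,
    # the value is [row_key, column_family, qualifier, raw_json_line].
    record = json.loads(line)
    row = str(record["id"])
    return (row, [row, column_family, qualifier, line])

sample = '{"id": 42, "name": "tabby"}'
print(to_put(sample))
# → ('42', ['42', 'cfamily', 'cats_json', '{"id": 42, "name": "tabby"}'])
```

Running this on a few sample lines confirms the key/value shape the `StringListToPutConverter` expects before involving the cluster.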

I am unable to run it. I have tried three different command-line options, but none of them produces any output or writes the data to the HBase table.

Here are the command-line options that I tried:

spark-submit --jars /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar --jars /usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2389 > sp_json.log

spark-submit --driver-class-path /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar sp_json.py localhost 2389 > sp_json.log

spark-submit --driver-class-path /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar --jars /usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2389 > sp_json.log

Here is the log file. It is extremely verbose, which is one of the reasons debugging is difficult in Apache Spark: it emits too much information.
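As an aside on the verbosity: Spark's driver logging is controlled by log4j, and the INFO chatter from spark-submit can usually be reduced by copying `conf/log4j.properties.template` to `conf/log4j.properties` under the Spark install and raising the root level, e.g.:

```
# $SPARK_HOME/conf/log4j.properties
# Raise the root logger from INFO to WARN to quieten spark-submit output
log4j.rootCategory=WARN, console
```

This only changes log output; it has no effect on the job itself.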

1 Answer


Finally got it working using the following command syntax:

spark-submit --jars /usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar,/usr/local/hbase/lib/hbase-examples-1.1.2.jar sp_json.py localhost 2399 > sp_json.log
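For context: passing `--jars` twice, as in the first attempt above, does not accumulate jars; later occurrences of the flag override earlier ones, so every extra jar must go into a single comma-separated list. A small illustration of how that list is split (the paths are the ones from the question):

```shell
# Multiple jars go in ONE --jars flag, separated by commas with no spaces:
JARS="/usr/local/spark/lib/spark-examples-1.5.2-hadoop2.4.0.jar,/usr/local/hbase/lib/hbase-examples-1.1.2.jar"
# spark-submit --jars "$JARS" sp_json.py localhost 2399 > sp_json.log

# Spark splits the value on commas; each entry is shipped as a separate jar:
echo "$JARS" | tr ',' '\n'
```

The same comma-separated convention applies to related flags such as `--py-files` and `--files`.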
