
I wrote some example code which connects to a Kafka broker, reads data from a topic, and sinks it into a SnappyData table.

from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.sql import SQLContext, Row, SparkSession
from pyspark.sql.snappy import SnappySession 
from pyspark.rdd import RDD
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.functions import col, explode, split
import time
import sys
import logging


def main(snappy):
    logger = logging.getLogger('py4j')
    logger.info("My test info statement")


    sns = snappy.newSession()

    # Read the Kafka topic as a streaming DataFrame
    df = sns \
        .readStream \
        .format("kafka") \
        .option("kafka.bootstrap.servers", "10.0.0.4:9092") \
        .option("subscribe", "test_import3") \
        .option("failOnDataLoss", "false") \
        .option("startingOffsets", "latest") \
        .load()

    # Kafka delivers key and value as binary; cast both to strings
    bdf = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    # Write the stream into the SnappyData table via the snappysink source
    streamingQuery = bdf \
        .writeStream \
        .format("snappysink") \
        .queryName("Devices3") \
        .trigger(processingTime="30 seconds") \
        .option("tablename", "devices2") \
        .option("checkpointLocation", "/tmp") \
        .start()

    streamingQuery.awaitTermination()


if __name__ == "__main__":
    # Smart Connector mode: a local Spark application that talks to the
    # SnappyData cluster through the snappydata.connection locator address
    spark = SparkSession.builder \
        .master("local[*]") \
        .appName("test") \
        .config("snappydata.connection", "10.0.0.4:1527") \
        .getOrCreate()
    snc = SnappySession(spark)
    main(snc)
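
For reference, a quick way to confirm rows are landing in the sink table is to query it from a second SnappySession. Below is a minimal sketch that reuses the same snappydata.connection and the devices2 table from the options above; it has to run as a separate process, since awaitTermination() blocks this one.

from pyspark.sql import SparkSession
from pyspark.sql.snappy import SnappySession

# Separate session with the same Smart Connector settings as the streaming script
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("check_sink") \
    .config("snappydata.connection", "10.0.0.4:1527") \
    .getOrCreate()
snappy = SnappySession(spark)   # same construction as in the main script

# devices2 is the table configured via option("tablename", ...) above
snappy.sql("SELECT COUNT(*) FROM devices2").show()
snappy.table("devices2").show(5)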
    

I'm submitting it with this command:

/opt/snappydata/bin/spark-submit --master spark://10.0.0.4:1527 /path_to/file.py --conf snappydata.connection=10.0.0.4:1527
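
As a side note, spark-submit only treats options placed before the script path as its own; anything after /path_to/file.py is handed to the script as an application argument. It still works here because the script also sets snappydata.connection via .config(), but the equivalent submit with the option in front would look like this:

/opt/snappydata/bin/spark-submit --master spark://10.0.0.4:1527 --conf snappydata.connection=10.0.0.4:1527 /path_to/file.py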

Everything works: data is read from the Kafka topic and written to the SnappyData table. What I don't understand is why I don't see this streaming query in the SnappyData dashboard UI. After submitting the PySpark code, the console showed that a new Spark Master UI had started.

Is it possible to connect to SnappyData's internal Spark master from PySpark, and if so, how?

tazonee
  • Try https://snappydatainc.github.io/snappydata/programming_guide/spark_jdbc_connector/ – mck Nov 30 '20 at 14:08

1 Answer


SnappyData supports Python jobs only in Smart Connector mode, which means they are always launched through a separate Spark cluster that talks to the SnappyData cluster. Hence your Python job shows up on that Spark cluster's UI and not on SnappyData's dashboard.
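
Since the job runs on that separate Spark cluster, you can watch it there, and the standard Structured Streaming monitoring APIs also work from the driver (run them before awaitTermination(), or from another thread). A minimal sketch, reusing streamingQuery and sns from the question's code; these are plain Spark APIs, not SnappyData-specific:

print(streamingQuery.status)        # whether a trigger/batch is currently running
print(streamingQuery.lastProgress)  # last batch: input rows, duration, Kafka offsets
for q in sns.streams.active:        # every active query on that session
    print(q.name, q.id, q.isActive)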

Amogh