
I am trying to get the distinct count of one column, grouped by another column, using a PySpark stream, but I am getting an incorrect count.

Function created:

from pyspark.sql.functions import weekofyear, approx_count_distinct

def silverToGold(silverPath, goldPath, queryName):
  return (spark.readStream
    .format("delta")
    .load(silverPath)
    # derive the week number from the event date
    .withColumn("week", weekofyear("eventDate"))
    # approx_count_distinct is used because an exact distinct count
    # is not supported on a streaming DataFrame
    .groupBy("week").agg(approx_count_distinct("device_id").alias("WAU"))
    .writeStream
    .format("delta")
    .option("checkpointLocation", goldPath + "/_checkpoint")
    .queryName(queryName)
    .outputMode("complete")
    .start(goldPath)
  )

Expected Result:

week WAU
1    7
2    4
3    9
4    9

Actual Result:

week WAU
1    7259
2    7427
3    7739
4    7076

Sample input data (contents of silverPath), in text format:

device_id,eventName,client_event_time,eventDate,deviceType
00007d948fbe4d239b45fe59bfbb7e64,scoreAdjustment,2018-06-01T16:55:40.000+0000,2018-06-01,android
00007d948fbe4d239b45fe59bfbb7e64,scoreAdjustment,2018-06-01T16:55:34.000+0000,2018-06-01,android
0000a99151154e4eb14c675e8b42db34,scoreAdjustment,2019-08-18T13:39:36.000+0000,2019-08-18,ios
0000b1e931d947b197385ac1cbb25779,scoreAdjustment,2018-07-16T09:13:45.000+0000,2018-07-16,android
0003939e705949e4a184e0a853b6e0af,scoreAdjustment,2018-07-17T17:59:05.000+0000,2018-07-17,android
0003e14ca9ba4198b51cec7d2761d391,scoreAdjustment,2018-06-10T09:09:12.000+0000,2018-06-10,ios
00056f7c73c9497180f2e0900a0626e3,scoreAdjustment,2019-07-05T18:31:10.000+0000,2019-07-05,ios
0006ace2d1db46ba94b802d80a43c20f,scoreAdjustment,2018-07-05T14:31:43.000+0000,2018-07-05,ios
000718c45e164fb2b017f146a6b66b7e,scoreAdjustment,2019-03-26T08:25:08.000+0000,2019-03-26,android
000807f2ea524bd0b7e27df8d44ab930,purchaseEvent,2019-03-26T22:28:17.000+0000,2019-03-26,android
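As a sanity check, the same aggregation can be run as a batch query over this sample, where an exact distinct count is allowed. A minimal sketch, assuming the records above are saved to a CSV file (the path is a placeholder):

from pyspark.sql.functions import to_date, weekofyear, countDistinct

# Read the sample records above (path is hypothetical)
sample_df = (spark.read
    .option("header", True)
    .csv("/tmp/sample_events.csv")
    .withColumn("eventDate", to_date("eventDate")))

# Exact distinct devices per week -- countDistinct works here
# because this is a batch query, not a streaming one
(sample_df
    .groupBy(weekofyear("eventDate").alias("week"))
    .agg(countDistinct("device_id").alias("WAU"))
    .orderBy("week")
    .show())

Comparing this exact batch result with the streaming output should show whether the streaming aggregation itself is at fault.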

Any suggestions on this?

Prabhanj
  • Can you change it to a batch query and show us input data to work with? Why do you use `approx_count_distinct`, not `count`? – Jacek Laskowski Jun 21 '20 at 14:41
  • Since it is a streaming query, I used `approx_count_distinct` to get the distinct count. Can you provide any reference for using `count()` to get distinct device_id values, as a distinct count can't be used on a streaming query? – Prabhanj Jun 23 '20 at 05:57
  • Before discussing another approach (with `count`, for instance), let's learn more about the data you work with. I still see nothing to believe the results are incorrect. How do you prove the incorrectness? Let us do this too. Thanks. – Jacek Laskowski Jun 23 '20 at 10:27
  • Added sample data for reference – Prabhanj Jun 23 '20 at 16:54
  • Thanks. Can you paste it in text format to ease copying and playing with it? Have you tried a batch query over the dataset? Did you get a proper result? – Jacek Laskowski Jun 24 '20 at 08:55
  • Haven't tried a batch query on this one; will paste the content – Prabhanj Jun 24 '20 at 09:24
  • Added input data in text format – Prabhanj Jun 24 '20 at 13:05

1 Answer

from pyspark.sql.functions import weekofyear, approx_count_distinct

def silverToGold(silverPath, goldPath, queryName):
    return (spark.readStream
                .format("delta")
                .load(silverPath)
                # group on the week expression directly instead of
                # adding a separate column first
                .groupBy(weekofyear("eventDate").alias("week"))
                # rsd=0.01 caps the relative standard deviation of the
                # approximate distinct count at 1%
                .agg(approx_count_distinct("device_id", rsd=0.01).alias("WAU"))
                .writeStream
                .format("delta")
                .option("checkpointLocation", goldPath + "/_checkpoint")
                .queryName(queryName)
                .outputMode("complete")
                .start(goldPath)
            )
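Usage might look like the following sketch; the paths and query name are illustrative placeholders:

# Start the streaming query
query = silverToGold("/mnt/delta/silver", "/mnt/delta/gold", "WAU_GOLD")

# After a few micro-batches have completed, inspect the gold table
spark.read.format("delta").load("/mnt/delta/gold").orderBy("week").show()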
– Aparna Bharadwaj (edited by David Buck)