
I am completely baffled by the following problem:

When I join two data frames and count the rows of the result, I get a slightly different count on each try. Here are the details:

I would like to join the data frames 'df_user_ids' and 'df_conversions':

df_user_ids.show()
>>>
+--------------------+
|             user_id|
+--------------------+
|AMsySZY-cqcufnXst...|
|AMsySZY1Oo75A6vKU...|
|AMsySZY4nbqZiuEMR...|
|AMsySZY5RSfgj6Xvi...|
|AMsySZY5geAmTx0er...|
|AMsySZY6Gskv_kEAv...|
|AMsySZY6MIOyPWM4U...|
|AMsySZYCEZYS00UB9...| 

df_conversions.show()
>>>
+--------------------+----------------------+---------+
|             user_id|time_activity_observed|converted|
+--------------------+----------------------+---------+
|CAESEAl1YPOZpaWVx...|   2018-03-23 12:15:37|        1|
|CAESEAuvSBzmfc_f3...|   2018-03-23 21:58:25|        1|
|CAESEBXWsSYm4ntvR...|   2018-03-30 12:16:53|        1|
|CAESEC-5uPwWGFdnv...|   2018-03-23 08:52:48|        1|
|CAESEDB3Z-NNvz7zL...|   2018-03-24 21:37:05|        1|
|CAESEDu7S7rGTVlj2...|   2018-04-01 17:00:12|        1|
|CAESEE4s6g1-JlUEt...|   2018-03-23 19:32:23|        1|
|CAESEELlJt0mE2xjn...|   2018-03-24 18:26:15|        1|

Both data frames have the key column named "user_id", and both are created using ".sampleBy()" with a fixed seed:

.sampleBy("converted", fractions={0: 0.035, 1: 1}, seed=0)    
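For reference, a self-contained version of that sampling step might look like the sketch below. The source frame "df_events" and the read path are assumptions for illustration, not from the original post; each of the two frames would come from its own source in this way:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical source frame with a 0/1 'converted' column (path is illustrative).
df_events = spark.read.parquet("s3://bucket/events")

# Stratified sample: keep 3.5% of non-converted rows and all converted rows.
df_sampled = df_events.sampleBy("converted", fractions={0: 0.035, 1: 1}, seed=0)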

Before I join the data frames I persist them to disk:

from pyspark import StorageLevel

df_user_ids.persist(StorageLevel.DISK_ONLY)
df_conversions.persist(StorageLevel.DISK_ONLY)

Then I verify that the row count of both data frames is consistent:

df_user_ids.count()
>>> 584309

df_user_ids.count()
>>> 584309

df_conversions.count()
>>> 5830

df_conversions.count()
>>> 5830

And check that the key column of both data frames does not contain duplicates:

df_user_ids.count()
>>> 584309

df_user_ids.select('user_id').distinct().count()
>>> 584309

df_conversions.count()
>>> 5830

df_conversions.select('user_id').distinct().count()
>>> 5830

Then I get inconsistent row counts when I join them!

df_user_ids.join(df_conversions, ["user_id"], "left").count()
>>> 584314

df_user_ids.join(df_conversions, ["user_id"], "left").count()
>>> 584317

df_user_ids.join(df_conversions, ["user_id"], "left").count()
>>> 584304

How is this possible??

Sometimes the joined count is higher than "df_user_ids.count()" and sometimes it is lower. I am running this code from a Zeppelin notebook on an AWS EMR cluster.

I already tried what is suggested in spark inconsistency when running count command:

  • ".persist(StorageLevel.DISK_ONLY)" doesn't help.
  • I don't use monotonically_increasing_id.

Johnny M

1 Answer


Looking at the series of operations you are performing on the DataFrames, I think the issue is due to the join. A join operation triggers a shuffle, where every node talks to every other node and they exchange data according to which node holds a certain key or set of keys (the ones you are joining on). While sharing data across executors, if an executor does not have the DataFrame persisted on disk, it will re-compute the DAG, and sampleBy is not guaranteed to return the same fraction of rows of the DataFrame on each run.
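Following that reasoning, one way to rule out re-computation is to cut the lineage entirely with ".checkpoint()" (or write the sampled frames out and read them back) rather than relying on ".persist()" alone. A minimal sketch, assuming a SparkSession named "spark" and an example checkpoint directory:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Any HDFS-accessible location works on EMR; this path is just an example.
spark.sparkContext.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

# checkpoint(eager=True) materializes the rows and truncates the DAG, so a
# lost or evicted block can never cause sampleBy to be re-run.
df_user_ids = df_user_ids.checkpoint(eager=True)
df_conversions = df_conversions.checkpoint(eager=True)

# The joined count should now be stable across repeated calls.
df_user_ids.join(df_conversions, ["user_id"], "left").count()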

Lakshman Battini
  • So setting a seed value on sampleBy and persisting the result might not be enough to generate a consistent result? – Johnny M Dec 05 '18 at 18:24
  • I found that using `pyspark.sql.functions.explode` also results in an inconsistent `count()` on the output dataframe if I don't persist the output first. – panc Aug 01 '22 at 18:46
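A sketch of the pattern that last comment describes; the toy frame and column names are illustrative, not from the comment:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Toy frame with an array column, for illustration only.
df = spark.createDataFrame([("u1", ["a", "b"]), ("u2", ["c"])], ["user_id", "items"])

# Persist the exploded output before counting, so later actions reuse the
# materialized rows instead of re-running the lineage.
df_exploded = df.select("user_id", F.explode("items").alias("item"))
df_exploded.persist()
df_exploded.count()  # the first action materializes the persisted data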