
I have a scenario where I need to trigger many SQL queries in parallel using for loops and collect the lists of results into a ListBuffer. However, I get many errors while running the loops and the results are incomplete. To exemplify, I made a very simplistic reproducible example:

import scala.collection.mutable.ListBuffer
val dummy = List("a","b").toDF.createOrReplaceTempView("df")
spark.catalog.cacheTable("df")
val dig = (0 to 9).par
var counter: Int = 0
val results = ListBuffer[List[org.apache.spark.sql.Row]]()

for (i1 <- dig) {
  for (i2 <- dig) {
    for (i3 <- dig) {
      println("||===" + i1 + "==" + i2 + "==" + i3 + "====" + (i1*100 + i2*10 + i3*1) + "====" + counter + "=======||")
      counter += 1
      results += spark.sql("select 'trial','" + i1 + "','" + i2 + "','" + i3 + "','" + (i1*100 + i2*10 + i3*1) + "','" + counter + "',* from df").collect().toList
    }
  }
}
results(0).take(2).foreach(println)
results.size
results.flatten.size

The above code simply counts from 0 to 999; each count appends a list of the table's 2 rows to the ListBuffer, along with a 'serial' counter value for comparison.

Running the code produces:

||===9==8==3====983====969=======||
||===9==8==5====985====969=======||
||===9==8==1====981====969=======||
||===9==8==2====982====969=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 784
||===9==8==7====987====974=======||
||===5==8==9====589====975=======||
||===9==8==4====984====976=======||
||===9==8==6====986====976=======||
||===9==8==9====989====977=======||
||===9==8==8====988====977=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 773
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 790
||===5==9==0====590====980=======||
||===5==9==2====592====980=======||
||===5==9==5====595====980=======||
||===5==9==1====591====980=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 795
||===5==9==3====593====984=======||
||===5==9==7====597====985=======||
||===5==9==8====598====985=======||
||===5==9==6====596====987=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 798
||===5==9==9====599====988=======||
||===5==9==4====594====989=======||
||===9==9==0====990====990=======||
||===9==9==5====995====991=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 784
||===9==9==2====992====992=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 789
||===9==9==3====993====993=======||
||===9==9==1====991====994=======||
||===9==9==4====994====995=======||
||===9==9==7====997====996=======||
||===9==9==8====998====997=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 790
||===9==9==6====996====998=======||
||===9==9==9====999====999=======||
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 805
16/09/20 14:10:05 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 798

scala> results(0).take(2).foreach(println)
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 802
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 805
[trial,0,0,0,0,16,a]
[trial,0,0,0,0,16,b]

scala> results.size
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 839
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 840
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 839
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 842
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 855
res3: Int = 1000

scala> results.flatten.size
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 860
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 854
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 860
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 868
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 874
res4: Int = 2000
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 882

scala> 

[Stage 589:=(28 + 0) / 28][Stage 590:>(27 + 1) / 28][Stage 591:>(20 + 7) / 28]16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 888
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 895
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 898
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 898
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 905
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 906
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 907
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 902
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 905
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 913
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 915
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 916
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 913
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 920
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 942
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 946
16/09/20 14:10:06 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 942
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 946
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 948
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 956
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 952
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 965
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 965
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 966
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 976
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 976
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 990
16/09/20 14:10:07 WARN org.apache.spark.ui.jobs.JobProgressListener: Job completed for unknown job 999


scala> 

and these are but some of the WARN messages I get.

You can see that the counter 'staggers' sometimes.

**This is where the trouble begins:**

A lot of warnings, but results.size = 1000 and results.flatten.size = 2000, as expected.

However, trying to count to 10000 the same way results in even more warnings:

import scala.collection.mutable.ListBuffer
val dummy = List("a","b").toDF.createOrReplaceTempView("df")
spark.catalog.cacheTable("df")
val dig = (0 to 9).par
var counter: Int = 0
val results = ListBuffer[List[org.apache.spark.sql.Row]]()

for (i1 <- dig) {
  for (i2 <- dig) {
    for (i3 <- dig) {
      for (i4 <- dig) {
        println("||===" + i1 + "==" + i2 + "==" + i3 + "==" + i4 + "====" + (i1*1000 + i2*100 + i3*10 + i4*1) + "====" + counter + "=======||")
        counter += 1
        results += spark.sql("select 'trial','" + i1 + "','" + i2 + "','" + i3 + "','" + i4 + "','" + (i1*1000 + i2*100 + i3*10 + i4*1) + "','" + counter + "',* from df").collect().toList
      }
    }
  }
}
results(0).take(2).foreach(println)
results.size
results.flatten.size

Output:

16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8797
||===0==9==4==3====943====9998=======||
16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8799
16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8801
16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8802
||===0==9==4==4====944====9999=======||
16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8803
16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8804
16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8805
16/09/20 14:18:24 WARN org.apache.spark.ui.jobs.JobProgressListener: Task start for unknown stage 8806

And the results:

scala> results(0).take(2).foreach(println)
[trial,3,0,0,0,3000,7,a]
[trial,3,0,0,0,3000,7,b]

scala> results.size
res3: Int = 9999

scala> results.flatten.size
res4: Int = 19998

So the results are missing one value.

I invite you to try the following code counting to 100000:

import scala.collection.mutable.ListBuffer
val dummy = List("a","b").toDF.createOrReplaceTempView("df")
spark.catalog.cacheTable("df")
val dig = (0 to 9).par
var counter: Int = 0
val results = ListBuffer[List[org.apache.spark.sql.Row]]()

for (i0 <- dig) {
  for (i1 <- dig) {
    for (i2 <- dig) {
      for (i3 <- dig) {
        for (i4 <- dig) {
          println("=============" + i0 + "==" + i1 + "==" + i2 + "==" + i3 + "==" + i4 + "====" + (i0*10000 + i1*1000 + i2*100 + i3*10 + i4*1) + "====" + counter + "=========")
          counter += 1
          results += spark.sql("select 'trial','" + i0 + "','" + i1 + "','" + i2 + "','" + i3 + "','" + i4 + "','" + (i0*10000 + i1*1000 + i2*100 + i3*10 + i4*1) + "','" + counter + "',* from df").collect().toList
        }
      }
    }
  }
}

Not only do I get tons of JobProgressListener warnings during the run, the results are also incomplete and non-deterministic:

scala> results(0).take(2).foreach(println)
[trial,8,5,0,0,0,85000,13,a]
[trial,8,5,0,0,0,85000,13,b]

scala> results.size
res3: Int = 99999

scala> results.flatten.size
res4: Int = 192908

In my real-life example I often get a “spark.sql.execution.id is already set” exception at random points during the run.

How can I fix this?

I have tried

spark.conf.set("spark.extraListeners","org.apache.spark.scheduler.StatsReportListener,org.apache.spark.scheduler.EventLoggingListener")

and read "Spark 1.6: java.lang.IllegalArgumentException: spark.sql.execution.id is already set"

and "Apache Spark: network errors between executors"

and http://docs.scala-lang.org/overviews/parallel-collections/overview.html about side-effecting operations, but there seem to be too many possible directions.

The bug that seems most related to this issue, IMHO, is https://issues.apache.org/jira/browse/SPARK-10548, which was supposed to be resolved in Spark 1.6.

Can anyone offer some hints about solving this situation? The complexity of my real-world case is similar to the 100000 count, and it fails at random stages during execution.

I deploy a Google Cloud Dataproc cluster:

gcloud dataproc clusters create clusTest --zone us-central1-b --master-machine-type n1-highmem-16 --num-workers 2 --worker-machine-type n1-highmem-8 --num-worker-local-ssds 2 --num-preemptible-workers 8 --scopes 'https://www.googleapis.com/auth/cloud-platform' --project xyz-analytics

[Screenshot of the Spark UI during the run]

Zahiro Mor
  • What version of Spark are you on? – Sachin Tyagi Sep 27 '16 at 07:32
  • spark 2.0 on google gcs – Zahiro Mor Sep 27 '16 at 07:33
  • java.lang.IllegalArgumentException: spark.sql.execution.id is already set at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:81) at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532) at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182) at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2187) at ... – Zahiro Mor Sep 29 '16 at 07:52
  • 1
    Looks like a bug to me. Apparently, some others are also reporting re-seeing this issue in 2.0 ( https://issues.apache.org/jira/browse/SPARK-13747 ) even though it is marked as resolved. Will have to dig through the code to understand the root cause. – Sachin Tyagi Sep 29 '16 at 11:01
  • yes, I have seen this bug and hoped it relates. – Zahiro Mor Sep 29 '16 at 12:07

1 Answer


> the results are incomplete and non-deterministic

The non-deterministic part should give a hint: you're running into a race condition when adding the results to your ListBuffer. It isn't thread-safe to update in parallel, so you end up losing some results if you run it long enough.
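
You can see the same effect with no Spark involved at all. Here is a minimal sketch (assuming Scala 2.11/2.12, where .par is available without an extra dependency):

import scala.collection.mutable.ListBuffer

val buf = ListBuffer[Int]()
// unsynchronized parallel appends: ListBuffer is not thread-safe,
// so concurrent += calls can drop elements or corrupt internal state
(1 to 100000).par.foreach(i => buf += i)
println(buf.size) // usually less than 100000, and varies between runs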

I tried it locally and could reproduce the incomplete-results problem. Simply adding a synchronized block around the append to the Buffer made the results complete. You can also use a thread-safe data structure for your job so that you don't need an explicit synchronized block, e.g. java.util.concurrent.ConcurrentLinkedQueue (see the sketch after the code below).

So the following solves this problem:

for (i1 <- dig) {
  for (i2 <- dig) {
    for (i3 <- dig) {
      for (i4 <- dig) {
        // note: counter is still incremented without synchronization,
        // so its values may interleave between threads
        counter += 1
        val result = spark.sql("select 'trial','" + i1 + "','" + i2 + "','" + i3 + "','" + i4 + "','" + (i1*1000 + i2*100 + i3*10 + i4*1) + "','" + counter + "',* from df").collect().toList
        // only the shared ListBuffer append needs to be guarded
        synchronized {
          results += result
        }
      }
    }
  }
}
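
If you would rather avoid the explicit synchronized block, a sketch of the ConcurrentLinkedQueue variant mentioned above could look like this (the queue and the final drain into a List are illustrative additions, and the counter column is left out for brevity):

import java.util.concurrent.ConcurrentLinkedQueue
import scala.collection.JavaConverters._

val queue = new ConcurrentLinkedQueue[List[org.apache.spark.sql.Row]]()

for (i1 <- dig; i2 <- dig; i3 <- dig; i4 <- dig) {
  // offer() is thread-safe, so concurrent producers need no extra locking
  queue.offer(spark.sql("select 'trial','" + i1 + "','" + i2 + "','" + i3 + "','" + i4 +
    "','" + (i1*1000 + i2*100 + i3*10 + i4*1) + "',* from df").collect().toList)
}

// drain into an ordinary List once all the parallel loops have finished
val results = queue.asScala.toList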

As for the “spark.sql.execution.id is already set” exception: I am unable to reproduce it with the example given above. (However, I am running the above code on local Spark.) Is it reproducible on a local setup?

Sachin Tyagi