
We are running a Spark job via spark-submit, and I can see that the job will be re-submitted in the case of failure.

How can I stop it from making attempt #2 in the case of a YARN container failure, or whatever the exception may be?


This happened due to a lack of memory and a "GC overhead limit exceeded" error.

jk-kim

4 Answers


There are two settings that control the number of retries (i.e. the maximum number of ApplicationMaster registration attempts with YARN before the ApplicationMaster, and hence the entire Spark application, is considered failed):

  • spark.yarn.maxAppAttempts - Spark's own setting. See MAX_APP_ATTEMPTS:

      private[spark] val MAX_APP_ATTEMPTS = ConfigBuilder("spark.yarn.maxAppAttempts")
        .doc("Maximum number of AM attempts before failing the app.")
        .intConf
        .createOptional
    
  • yarn.resourcemanager.am.max-attempts - YARN's own setting, with a default of 2.

As you can see in YarnRMClient.getMaxRegAttempts, the actual number is the minimum of the YARN and Spark configuration settings, with YARN's being the last resort.
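For illustration, here is a rough sketch of that resolution logic (this is not the actual Spark source; the object and method names are made up for the example):

    // Sketch: Spark's spark.yarn.maxAppAttempts, when set, is capped by YARN's
    // yarn.resourcemanager.am.max-attempts; when unset, YARN's value (default 2) applies.
    object MaxAttemptsSketch {
      def effectiveMaxAttempts(sparkMaxAppAttempts: Option[Int], yarnMaxAttempts: Int): Int =
        sparkMaxAppAttempts.map(x => math.min(x, yarnMaxAttempts)).getOrElse(yarnMaxAttempts)

      def main(args: Array[String]): Unit = {
        println(effectiveMaxAttempts(Some(1), 2)) // 1 -> the app fails on the first AM failure
        println(effectiveMaxAttempts(None, 2))    // 2 -> YARN's default of two attempts applies
      }
    }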

Jacek Laskowski
  • Since it appears we can use either option to set the max attempts to 1 (since a minimum is used), is one preferable over the other, or would it be a better practice to set both to 1? – schwadan Nov 11 '19 at 15:12
  • @EvilTeach Links fixed. Let me know if you need anything else to make the answer better. Merci beaucoup! – Jacek Laskowski Jul 02 '20 at 16:57

An API/programming-language-agnostic solution is to set the YARN max attempts as a command-line argument:

spark-submit --conf spark.yarn.maxAppAttempts=1 <application_name>

See @code's answer.

RNHTTR

Add the property yarn.resourcemanager.am.max-attempts to your YARN configuration (typically yarn-site.xml; its default value of 2 comes from yarn-default.xml). It specifies the maximum number of application attempts.
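For example, a minimal override could look like the sketch below. Note that this caps ApplicationMaster attempts for every application submitted to the cluster, not just Spark jobs:

    <!-- yarn-site.xml: cluster-wide cap on ApplicationMaster attempts -->
    <property>
      <name>yarn.resourcemanager.am.max-attempts</name>
      <value>1</value>
    </property>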

For more details, see the yarn-default.xml reference in the Hadoop documentation.

Hamza Zafar

But in general, in which cases would it fail once and recover on the second attempt? When the cluster or queue is too busy, I guess. I am running jobs with Oozie coordinators, and I was thinking of setting this to 1: if the job fails, it will simply run again at the next materialization.

rio