We are running a Spark job via spark-submit, and I can see that the job gets re-submitted (attempt #2) when it fails.
How can I stop it from making attempt #2 when a YARN container fails, or whatever the exception may be?
In our case the failure happened due to a lack of memory and a "GC overhead limit exceeded" error.
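For context, here is roughly how we submit the job (the class name, jar, and memory values below are placeholders, not our exact settings). I came across the `spark.yarn.maxAppAttempts` property and am wondering whether pinning it to 1, as sketched here, is the right way to disable the retry:

```shell
# Sketch of our submission. --conf spark.yarn.maxAppAttempts=1 is what I am
# considering to prevent attempt #2; as I understand it, the effective limit
# is also capped by yarn.resourcemanager.am.max-attempts on the
# ResourceManager side. Class and jar names are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.maxAppAttempts=1 \
  --executor-memory 4g \
  --class com.example.MyJob \
  my-job.jar
```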