
I'm deploying a Spark data processing job on an EC2 cluster. The job is small for the cluster (16 cores with 120 GB of RAM in total), and the largest RDD has only 76k+ rows. But it is heavily skewed in the middle (thus requiring repartitioning), and each row has around 100 KB of data after serialization. The job always gets stuck at repartitioning; namely, it constantly hits the following errors and retries:

org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle

org.apache.spark.shuffle.FetchFailedException: Error in opening FileSegmentManagedBuffer

org.apache.spark.shuffle.FetchFailedException: java.io.FileNotFoundException: /tmp/spark-...

I've tried to identify the problem, but both memory and disk consumption on the machines throwing these errors appear to be below 50%. I've also tried different configurations (a sketch of how I apply them follows the list below), including:

let driver/executor memory use 60% of total memory.
let Netty prioritize the JVM shuffle buffer.
increase the shuffle streaming buffer to 128m.
use KryoSerializer and max out all of its buffers.
increase the shuffle memoryFraction to 0.4.
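
For reference, this is roughly how I apply those settings when building the context. The property names are my best guess at mapping the bullets above onto Spark 1.x configuration keys, and the exact values are illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("skewed-repartition-job")              // hypothetical app name
      // ~60% of a node's RAM for the executor JVM; driver memory has to be
      // supplied before the driver JVM starts (spark-submit --driver-memory)
      .set("spark.executor.memory", "45g")
      // let Netty favor on-heap (JVM) buffers over direct buffers
      .set("spark.shuffle.io.preferDirectBufs", "false")
      // Kryo with a generous maximum buffer
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryoserializer.buffer.max", "512m")
      // pre-Spark-1.6 shuffle memory fraction
      .set("spark.shuffle.memoryFraction", "0.4")

    val sc = new SparkContext(conf)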

But none of them worked. The small job always triggers the same series of errors and maxes out its retries (up to 1000 times). How should I troubleshoot this kind of situation?

Thanks a lot if you have any clue.

tribbloid

3 Answers

13

Check your log to see if you get an error similar to this:

ERROR 2015-05-12 17:29:16,984 Logging.scala:75 - Lost executor 13 on node-xzy: remote Akka client disassociated

Every time you get this error, it is because you lost an executor. As for why you lost the executor, that is another story; again, check your log for clues.

One thing to know: YARN can kill your job if it thinks you are using "too much memory".

Check for something like this:

org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl  - Container [<edited>] is running beyond physical memory limits. Current usage: 18.0 GB of 18 GB physical memory used; 19.4 GB of 37.8 GB virtual memory used. Killing container.

Also see: http://apache-spark-developers-list.1001551.n3.nabble.com/Lost-executor-on-YARN-ALS-iterations-td7916.html

The current state of the art is to increase spark.yarn.executor.memoryOverhead until the job stops failing. We do have plans to try to automatically scale this based on the amount of memory requested, but it will still just be a heuristic.
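
In practice that usually comes down to something like the following when building the SparkConf; the overhead value is purely illustrative and is given in MB, as Spark 1.x on YARN expects:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // extra off-heap headroom YARN reserves per executor container, in MB;
      // keep raising it until the "running beyond physical memory limits"
      // kills stop
      .set("spark.yarn.executor.memoryOverhead", "2048")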

LabOctoCat
  • Thanks a lot! My problem is that I'm simply using the Spark Standalone master. The executor loss is indeed a problem for large shuffles, since each one takes a long time to write to non-persistent storage, and once it's lost it has to start over. I'm investigating whether frequent checkpointing can solve the problem – tribbloid May 15 '15 at 01:43
  • Did you gain insight into this? – raam86 Nov 05 '15 at 11:44
  • I've rewritten my workflow by manually unpersisting some persisted RDDs and replacing other persist() calls with checkpoint() to free more memory and space (roughly as in the sketch below). The error has disappeared for the moment, but considering the memory/disk consumption profile when it errored out, it shouldn't have happened in the first place. I'll update when I encounter this again – tribbloid Nov 13 '15 at 20:07
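
A minimal sketch of that kind of change, assuming a Spark 1.x RDD workflow; the RDD and function names here are made up:

    // a reliable directory is required before checkpoint() writes anything
    sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

    val repartitioned = bigSkewedRdd.repartition(256)
    repartitioned.checkpoint()   // truncate lineage; data goes to the checkpoint dir
    repartitioned.count()        // force materialization of the checkpoint

    val result = repartitioned.map(expensiveTransform)

    // manually free memory/disk held by an earlier cached RDD once it is no
    // longer needed, instead of waiting for it to be evicted
    earlierCachedRdd.unpersist(blocking = true)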
2

I was also getting the error

org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle

and looking further in the log I found

Container killed on request. Exit code is 143

After searching for the exit code, I realized that it's mainly related to memory allocation. So I checked the amount of memory I had configured for the executors, and found that by mistake I had given 7g to the driver and only 1g to each executor. After increasing the executor memory, my Spark job ran successfully.
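
For illustration, the change amounted to rebalancing something like this; the corrected values below are made up, and in practice driver memory has to be set before the driver JVM launches (e.g. via spark-submit --driver-memory), not on the SparkConf at runtime:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.driver.memory", "2g")    // was 7g: far more than the driver needed
      .set("spark.executor.memory", "6g")  // was 1g: too little for this shuffle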

0

It seems that the changeQueue operation I performed may have caused this problem: the server was changed after I changed the queue.

Jarris