I have a Reducer whose reduce method runs a loop with a very large number of iterations, and each iteration calls a computation-heavy function:
while (context.getCounter(SOLUTION_FLAG.SOLUTION_FOUND).getValue() < 1 && itrCnt < MAX_ITR)
Here MAX_ITR is the iteration count, supplied as user input.
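
For context, here is a stripped-down sketch of the reduce method (SOLUTION_FLAG.SOLUTION_FOUND and MAX_ITR are from my job; heavyComputation is a placeholder for my expensive function):

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        long itrCnt = 0;
        while (context.getCounter(SOLUTION_FLAG.SOLUTION_FOUND).getValue() < 1
                && itrCnt < MAX_ITR) {
            heavyComputation(key);  // placeholder: in total this runs far longer than 600 secs
            itrCnt++;
        }
    }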
The problem is that when I run the job on a Hadoop cluster, the Reducer task hits the task timeout and is killed:
17/05/06 21:09:43 INFO mapreduce.Job: Task Id : attempt_1494129392154_0001_r_000000_0, Status : FAILED
AttemptID:attempt_1494129392154_0001_r_000000_0 Timed out after 600 secs
What should I do to avoid the timeout? (My guess is that I need to send heartbeat/progress signals from inside the loop.)
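
Concretely, would something like this keep the task alive? (Again, heavyComputation is my placeholder; context.progress() and context.setStatus() are the Hadoop calls I found for reporting liveness.)

    while (context.getCounter(SOLUTION_FLAG.SOLUTION_FOUND).getValue() < 1
            && itrCnt < MAX_ITR) {
        heavyComputation(key);                      // my expensive step
        context.progress();                         // heartbeat: resets the timeout clock
        context.setStatus("iteration " + itrCnt);   // optional: visible in the web UI
        itrCnt++;
    }

Alternatively, I could raise mapreduce.task.timeout (default 600000 ms, which matches the "600 secs" in the error), but that feels like a workaround rather than a fix.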