I am attempting to run a basic MapReduce program on macOS 10.12 that finds the maximum temperature in a log file of weather data. When I run the job, I get the following stack trace:
Stack trace: ExitCodeException exitCode=126:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The stderr log for the job in the ResourceManager contains the following message: "/bin/bash: /bin/java: is a directory".
Originally I was getting a similar error, "/bin/bash: /bin/java: No such file or directory", and I modified the hadoop-config.sh script as suggested by the answer in this post. The block now looks like this:
if [[ -z $JAVA_HOME ]]; then
  # On OSX use java_home (or /Library for older versions)
  if [ "Darwin" == "$(uname -s)" ]; then
    if [ -x /usr/libexec/java_home ]; then
      export JAVA_HOME=${JAVA_HOME}
    else
      export JAVA_HOME=${JAVA_HOME}
    fi
  fi
fi
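In case it helps, these are the commands I run in a terminal to see what that block would end up pointing Hadoop at. Nothing here is confirmed output, just the checks I use to inspect my environment:

/usr/libexec/java_home          # what macOS reports as the JDK home
echo "$JAVA_HOME"               # what my shell currently has exported
ls -ld "$JAVA_HOME/bin/java"    # the binary I expect the container launch script to run
ls -ld /bin/java                # what the path collapses to if JAVA_HOME is empty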
My $JAVA_HOME variable is set to: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
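If it is relevant, I assume an alternative would be to hard-code JAVA_HOME in etc/hadoop/hadoop-env.sh rather than relying on hadoop-config.sh to derive it. This is only a sketch of what I have in mind; the path is simply my install location and I have not yet tried it:

# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
# or, to let macOS pick the default JDK:
# export JAVA_HOME=$(/usr/libexec/java_home)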
Is this a result of a configuration issue with my JAVA_HOME variable?