I have some questions about running Spark on YARN.
How can I run my jar file on YARN? When I try to submit it, I get:
Exception in thread "main" java.lang.Exception: When running with master 'yarn-cluster' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
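For context, this is roughly what I am trying (a minimal sketch; the paths and the class name `com.example.MyApp` are placeholders for my actual setup). The error above suggests Spark needs to locate the cluster's Hadoop configuration first:

```shell
# Point Spark at the directory holding the cluster's Hadoop/YARN
# config files (core-site.xml, yarn-site.xml). The path below is a
# placeholder; the real location depends on the installation.
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Submit the application jar in YARN cluster mode.
# ('yarn-cluster' is the older syntax; newer Spark versions use
#  --master yarn --deploy-mode cluster.)
spark-submit \
  --class com.example.MyApp \
  --master yarn-cluster \
  my-app.jar
```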
Should I copy Spark onto every node of this supercomputer?
All of the nodes in this supercomputer are interconnected, and it has its own architecture.
Has anyone managed to run a Spark application on a supercomputer?