
I have an oozie configuration:

    <spark xmlns="uri:oozie:spark-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>batch_2</value>
            </property>
            <property>
                <name>job.queue.name</name>
                <value>batch_1</value>
            </property>
        </configuration>
        <master>yarn-cluster</master>
        <mode>cluster</mode>
        <name>Batch Search Oozie</name>
        <class>eu.inn.ilms.batchSearch.BatchSearch</class>
        <jar>hdfs:///user/oozie/workflows/batchsearch/lib/batchSearch-0.0.1-SNAPSHOT.jar</jar>
        <arg>${zookeeperQuorum}</arg>
        <arg>${solrQuery}</arg>
        <arg>${hdfsFolderPaths}</arg>
        <arg>${solrFinalCollection}</arg>
        <arg>${mongoServiceUrl}</arg>
    </spark>

The map-reduce job is executed on the queue I want, but the Spark job still runs on the default queue. Is there a property that will allow me to set this?

Alessandro La Corte
    FYI, action properties labeled `oozie.launcher.x.y.z` will be applied to the Oozie "launcher" (a dummy mapper) that is used to bootstrap shell / java / sqoop / spark actions, as `x.y.z`; while action properties labeled directly `x.y.z` should be applied to the child Yarn job spawned by sqoop / spark -- unless the Spark driver has its own override rules... Also Oozie has some quirks and occasional regressions. – Samson Scharfrichter Jul 16 '17 at 15:13
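To make the comment's distinction concrete, a single `<configuration>` block can carry both kinds of properties: the `oozie.launcher.`-prefixed one targets the launcher (the bootstrap mapper), while the bare one is intended for the child Yarn job. This is a sketch; the queue name `launcher_queue` is a placeholder, not from the original question:

    <configuration>
        <property>
            <!-- applied to the Oozie launcher job -->
            <name>oozie.launcher.mapred.job.queue.name</name>
            <value>launcher_queue</value>
        </property>
        <property>
            <!-- intended for the child Yarn job spawned by the action -->
            <name>mapred.job.queue.name</name>
            <value>batch_2</value>
        </property>
    </configuration>

As the comment notes, Spark may apply its own override rules, which is why the `<spark-opts>` approach in the answer below is the reliable way to set the queue for the Spark job itself.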

1 Answer


Use the `<spark-opts>` tag, which passes options through to `spark-submit`; `--queue` sets the Yarn queue:

    <spark-opts>--queue ${queue_name}</spark-opts>
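In context, the element goes after `<jar>` and before the `<arg>` elements in the spark action (per the spark-action schema's element ordering). `${queue_name}` is a placeholder you would define in your `job.properties`; here it could simply be `batch_1`:

    <class>eu.inn.ilms.batchSearch.BatchSearch</class>
    <jar>hdfs:///user/oozie/workflows/batchsearch/lib/batchSearch-0.0.1-SNAPSHOT.jar</jar>
    <spark-opts>--queue ${queue_name}</spark-opts>
    <arg>${zookeeperQuorum}</arg>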
philantrovert