We are planning to set up Apache Spark 3.0 outside of our existing HDP 2.6 cluster and to submit jobs to that cluster's YARN (v2.7) without upgrading or modifying it. Users currently run Spark 2.3, which is included in the HDP stack. The goal is to enable Apache Spark 3.0 outside the HDP cluster without interrupting the current jobs.
What are the best approaches for this? Should we set up Apache Spark 3.0 client (edge) nodes outside the HDP cluster and submit jobs from those new client nodes?
Any recommendations? In particular, what should we watch out for to avoid conflicts with the current HDP stack and its components? For context, the sketch below is roughly what we have in mind.
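The following is a minimal Scala smoke-test job we are considering running from a new Spark 3.0 edge node against the existing YARN 2.7 ResourceManager. All paths, the spark.yarn.archive location, and the hdp.version value are placeholders/assumptions for illustration, not settings taken from our cluster:

```scala
// Launched with the Spark 3.0 spark-submit installed on the new edge node,
// e.g. (all paths and versions below are placeholders):
//
//   export HADOOP_CONF_DIR=/etc/hadoop/conf        # client configs copied from the HDP 2.6 cluster
//   /opt/spark-3.0/bin/spark-submit \
//     --master yarn --deploy-mode cluster \
//     --conf spark.yarn.archive=hdfs:///apps/spark3/spark3-libs.zip \
//     --conf spark.driver.extraJavaOptions=-Dhdp.version=2.6.x.x \
//     --class Spark3YarnSmokeTest spark3-smoke-test.jar
//
// spark.yarn.archive points at an HDFS archive of the Spark 3.0 jars so the
// executors use Spark 3 classes rather than the cluster's Spark 2.3 install;
// the -Dhdp.version setting is the commonly cited workaround for the
// ${hdp.version} substitution in HDP client configs.

import org.apache.spark.sql.SparkSession

object Spark3YarnSmokeTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark3-on-hdp26-yarn-smoke-test")
      .getOrCreate()

    // Trivial distributed job: if this completes, Spark 3.0 executors are
    // running on the existing YARN 2.7 NodeManagers without any change to
    // the HDP stack itself.
    val count = spark.range(0, 1000000).count()
    println(s"Row count: $count")

    spark.stop()
  }
}
```

Does this kind of setup look reasonable, or is there a better-supported approach?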