I'm developing some ETL jobs using Mosaic Decisions. When I run a job, Mosaic submits it to Spark with the default configuration. That default allocation is far larger than I need for development, since I'm only running a small number of records for unit testing.
Is there a way to instruct Mosaic to use fewer Spark resources for my development runs, so that I don't unnecessarily block the cluster's resources?
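
To illustrate what I'm after, here is a minimal PySpark sketch of the kind of scaled-down configuration I'd like Mosaic to submit for a dev run. The property names are standard Spark settings; whether and how Mosaic exposes them per job or per environment is exactly what I'm asking.

```python
# Minimal sketch of a "small" dev-sized Spark session.
# These are standard Spark configuration properties, not Mosaic-specific ones.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-dev-run")
    # A couple of small executors instead of the cluster-sized defaults
    .config("spark.executor.instances", "2")
    .config("spark.executor.cores", "1")
    .config("spark.executor.memory", "1g")
    .config("spark.driver.memory", "1g")
    .getOrCreate()
)
```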