We have a complex finance / portfolio analytics application that we would like to adapt to take advantage of Spark.
Instead of having the application submit isolated jars that perform the computation and then retrieving the results back out of SQL, how viable would it be to run the entire application as the Spark driver, so that results from Spark jobs can be accessed directly in the main application?
Is this a recommended use case for Spark? What would be the potential disadvantages of this approach? Would there be any performance or latency implications?
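To make the question concrete, what I have in mind is something along these lines (a rough sketch only; the object name, master URL, and data paths are placeholders, not from our actual codebase):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical: the long-running analytics application itself acts as the
// Spark driver, instead of shelling out to spark-submit with separate jars.
object PortfolioAnalytics {
  def main(args: Array[String]): Unit = {
    // The application owns the SparkSession for its whole lifetime; every
    // job submitted through it runs with this process as the driver.
    val spark = SparkSession.builder()
      .appName("portfolio-analytics")
      .master("spark://cluster-master:7077") // assumed standalone cluster URL
      .getOrCreate()

    // Results come back as in-process objects -- no round-trip through SQL.
    val positions = spark.read.parquet("hdfs:///data/positions") // assumed path
    val exposureByDesk = positions
      .groupBy("desk")
      .sum("exposure")
      .collect() // materializes results in the driver, i.e. this application

    // ... feed exposureByDesk straight into the rest of the application ...

    spark.stop()
  }
}
```

The appeal is that `collect()` (or `toLocalIterator()`) hands results directly to the application's own data structures, but it also means the application process must stay up for the lifetime of the Spark context and hold any collected results in its own heap.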