I'm playing around with Livy/Spark and I'm a little confused about how to use some of it. There's an example in Livy's examples folder of building jobs that get uploaded to Spark. I like the interfaces being used, but I want to talk to Livy/Spark over HTTP, since I don't have a Java client. It seems that if I use the LivyClient to upload jars, they only exist within that one Spark session. Is there a way to upload Livy jobs to Spark and have them persist across all of Spark? Or would it be better to build those jobs/apps in Spark itself?
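To make the HTTP side concrete, this is roughly the call I mean: a minimal sketch using Python's requests library, where the server address and jar path are just placeholders:

```python
import json
import requests

LIVY_URL = "http://localhost:8998"   # assumed Livy server address
HEADERS = {"Content-Type": "application/json"}

# Create an interactive session and attach a jar from HDFS.
# As far as I can tell, the jar only lives inside this one session.
payload = {
    "kind": "spark",
    "jars": ["hdfs:///user/me/my-algorithms.jar"],  # hypothetical jar
}
resp = requests.post(LIVY_URL + "/sessions",
                     headers=HEADERS, data=json.dumps(payload))
session = resp.json()
print(session["id"], session["state"])
```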
Honestly, I'm trying to figure out what the best approach would be. I want to be able to do interactive things via the shell, but I also want to build custom jobs for algorithms that aren't available in Spark and that I'd use frequently. I'm not sure which way I should tackle this. Any thoughts? How should I be using Livy? Just as the REST service in front of Spark, and then handle building custom apps/methods in Spark itself?
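For the interactive part, this is the pattern I've pieced together so far; again just a sketch, assuming the session created above got id 0:

```python
import json
import time
import requests

LIVY_URL = "http://localhost:8998"   # assumed Livy server address
HEADERS = {"Content-Type": "application/json"}
SESSION_ID = 0                       # id returned when the session was created

# Post a snippet of Scala code to run inside the live session.
resp = requests.post(
    f"{LIVY_URL}/sessions/{SESSION_ID}/statements",
    headers=HEADERS,
    data=json.dumps({"code": "sc.parallelize(1 to 100).sum()"}))
statement = resp.json()

# Statements run asynchronously, so poll until Livy reports a result.
while statement["state"] not in ("available", "error"):
    time.sleep(1)
    statement = requests.get(
        f"{LIVY_URL}/sessions/{SESSION_ID}/statements/{statement['id']}",
        headers=HEADERS).json()
print(statement["output"])
```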
For example: say I have some JavaScript application with data I can load, and I want to run algorithm X on it. Algorithm X may or may not be implemented in Spark, but by pressing a button I want to get that data into Spark (whether it's put into HDFS or pulled from Elasticsearch or wherever) and then have that particular algorithm run on it. If I have Livy, I'd want to call some REST command in Livy to kick that off. What's the standard way of doing this?
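Concretely, I picture the button press triggering something like this batch submission; the jar path, class name, and arguments here are invented just to show the shape of it:

```python
import json
import requests

LIVY_URL = "http://localhost:8998"   # assumed Livy server address
HEADERS = {"Content-Type": "application/json"}

# Ask Livy to spark-submit a pre-built jar as a batch job.
# The jar path, class name, and args are all hypothetical.
payload = {
    "file": "hdfs:///user/me/my-algorithms.jar",
    "className": "com.example.AlgorithmX",
    "args": ["--input", "elasticsearch://my-index"],
}
resp = requests.post(LIVY_URL + "/batches",
                     headers=HEADERS, data=json.dumps(payload))
print(resp.json())   # returns a batch id + state that can be polled
```

Is this batch route the right way to handle the frequently-used custom algorithms, with sessions reserved for the interactive stuff?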
Thanks