I want to use the machine-learning capabilities of Apache Spark through a RESTful API, so I use the Spark Job Server. I have already developed an interface for the communication, but found that, even though I am running in persistent context mode, I can't keep objects such as a trained model alive between different job cycles. I can't find any documentation on how to actually implement a persistent job in Java. I am also quite new to Apache Spark and know no Scala. I don't want to start the development process over, so I would be very grateful if somebody could share their experience of persisting Java objects between Apache Spark Job Server jobs, or point me to a good example or documentation.
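To make the goal concrete, this is roughly the flow I am after: one job trains a model and stores it somewhere, and a later job picks it up again. The sketch below is only an illustration of that idea, assuming the underlying model is an MLlib LogisticRegressionModel; the class name ModelPersistence and the HDFS path are made up:

import org.apache.spark.SparkContext;
import org.apache.spark.mllib.classification.LogisticRegressionModel;

public class ModelPersistence {

    // At the end of the training job: write the fitted model to a shared location
    public static void saveModel(SparkContext sc, LogisticRegressionModel model) {
        model.save(sc, "hdfs:///models/LRS");
    }

    // At the start of a later scoring job: read the model back in
    public static LogisticRegressionModel loadModel(SparkContext sc) {
        return LogisticRegressionModel.load(sc, "hdfs:///models/LRS");
    }
}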
To begin with, it would even be sufficient to serialize an object and save it to disk, but I wasn't successful with that inside the Spark Job Server either. I used simple code like the following to read the object back, but this apparently doesn't work as simply in Spark:
import java.io.FileInputStream;
import java.io.ObjectInputStream;

// Deserialize the previously saved service object from disk;
// LRService is a field of the enclosing class.
try (FileInputStream streamIn = new FileInputStream("D:\\LRS.ser");
     ObjectInputStream objectInputStream = new ObjectInputStream(streamIn)) {
    LRService = (LogisticRegressionService) objectInputStream.readObject();
} catch (Exception e) {
    e.printStackTrace();
}
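For completeness, the write side of my attempt looks like this; LRService is the trained LogisticRegressionService instance from my own code, and the path is again just an example:

import java.io.FileOutputStream;
import java.io.ObjectOutputStream;

// Serialize the trained service object to disk;
// LogisticRegressionService must implement java.io.Serializable for this to work.
try (FileOutputStream streamOut = new FileOutputStream("D:\\LRS.ser");
     ObjectOutputStream objectOutputStream = new ObjectOutputStream(streamOut)) {
    objectOutputStream.writeObject(LRService);
} catch (Exception e) {
    e.printStackTrace();
}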