I have read through the documentation but can't find answers to the following questions:
I would prefer to use an already running Spark cluster (i.e. add a jar so that I can use SnappyContext there) rather than the bundled Spark. Is that possible, or is the bundled Spark mandatory? If it is possible, please advise: SPARK_HOME seems to be set at runtime by the launchers. (A rough sketch of what I have in mind is below, after the questions.)
Where should JAVA_HOME be defined? For now I have set it in bin/spark-class on all Snappy server nodes.
How can SnappyData be built with Scala 2.11?
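For the first question, this is roughly what I am hoping to do once the SnappyData jar is on the application classpath (added via --jars or --packages at submit time). It is only a sketch: the master URL is a placeholder for my existing cluster, and I am not sure this is the supported way to attach a SnappyContext to a plain Spark deployment.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SnappyContext

object SnappyOnExistingSpark {
  def main(args: Array[String]): Unit = {
    // Point the application at the already running cluster
    // instead of the Spark bundled with SnappyData.
    val conf = new SparkConf()
      .setAppName("snappy-on-existing-spark")
      .setMaster("spark://existing-master:7077") // placeholder master URL

    val sc = new SparkContext(conf)

    // Wrap the plain SparkContext in a SnappyContext and use it like a SQLContext.
    val snc = SnappyContext(sc)
    snc.sql("SELECT 1").show()

    sc.stop()
  }
}
```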
Appreciated, Saif