Please see this link. It describes how to start a spark-shell and connect it to the snappy store.
http://snappydatainc.github.io/snappydata/connectingToCluster/#using-the-spark-shell-and-spark-submit
Essentially, you need to provide the locator property; this is the same locator you used to start the snappy cluster.
$ bin/spark-shell --master local[*] --conf snappydata.store.locators=locatorhost:port --conf spark.ui.port=4041
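Once the shell is up, you can access tables in the snappy store through a SnappyContext. Here is a minimal sketch; the table name MY_TABLE is only an example, and it assumes the SnappyContext API shown in the linked docs (newer releases use SnappySession instead):

// Wrap the shell's existing SparkContext in a SnappyContext
scala> val snc = org.apache.spark.sql.SnappyContext(sc)
// The table definition and data come from the snappy store,
// but the query itself runs on this shell's local compute cluster
scala> val df = snc.sql("SELECT * FROM MY_TABLE")
scala> df.show()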
Note that with the above, a separate compute cluster is created to run your program. The snappy cluster is not used for computation when you run your code from this shell; only the required table definitions and data are fetched, efficiently, from the snappy store.
In the future, we might make this shell connect to the snappy cluster in such a way that it uses the snappy cluster itself as its compute cluster.