I'm working my way up to a "Hello World" kind of SnappyData application, which I'd like to be able to build and run in IntelliJ. My cluster so far is one locator, one lead, and one server, all on the local machine. I just want to connect to it, write a trivial bit of data (maybe a DataFrame), and see that it's working.
The documentation says I should be able to do something like this:
val spark: SparkSession = SparkSession
  .builder()
  .appName("SnappyTest")
  .master("xxx.xxx.xxx.xxx:xxxx")
  .getOrCreate()

val snappy = new SnappySession(spark.sparkContext)
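For concreteness, here's the entire toy program I'm trying to compile. Everything past the doc snippet (the object wrapper, the sample DataFrame, the table name, and the column-table write) is my own guess at a minimal hello-world based on my reading of the docs, so treat it as a sketch rather than known-good SnappyData usage:

import org.apache.spark.sql.{SnappySession, SparkSession}

object SnappyTest {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession
      .builder()
      .appName("SnappyTest")
      .master("xxx.xxx.xxx.xxx:xxxx") // placeholder, as in the docs
      .getOrCreate()

    // Wrap the SparkContext in a SnappySession (the symbol that won't resolve)
    val snappy = new SnappySession(spark.sparkContext)

    // My intended "hello world" payload: a one-row DataFrame written to a
    // SnappyData column table and read back
    val df = snappy.createDataFrame(Seq((1, "hello"))).toDF("id", "msg")
    df.write.format("column").saveAsTable("hello_table")
    snappy.table("hello_table").show()
  }
}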
When I try to build this, though, I get "Cannot resolve symbol SnappySession."
Here's what I have in my build.sbt:
name := "snappytest"
version := "0.1"
scalaVersion := "2.11.11"
// https://mvnrepository.com/artifact/io.snappydata/snappy-spark-core_2.11
libraryDependencies += "io.snappydata" % "snappy-spark-core_2.11" % "2.1.1.1"
// https://mvnrepository.com/artifact/io.snappydata/snappy-spark-sql_2.11
libraryDependencies += "io.snappydata" % "snappy-spark-sql_2.11" % "2.1.1.1"
(I refreshed the project after adding those.)
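Unless someone spots the problem sooner, my next debugging step is to take IntelliJ out of the picture and see whether the symbol resolves from a plain REPL on the project classpath (sbt console is a stock sbt command, so no IDE indexing is involved):

sbt console
scala> import org.apache.spark.sql.SparkSession   // I expect this to work
scala> import org.apache.spark.sql.SnappySession  // if this fails too, it's my dependencies, not the IDE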
I gather that, when I import something Spark-related such as:
import org.apache.spark.sql.SparkSession
I'm really getting SnappyData's extended build of that class, pulled in by the snappy-spark dependencies above, rather than the stock Apache Spark artifact. If that's right, I should also be able to:
import org.apache.spark.sql.SnappySession
That import, though, is exactly the one that fails. I also don't see anything Snappy-related in the code-completion drop-downs while I'm typing; it looks for all the world like vanilla Spark.
What am I missing here? Presumably something obvious. I can't find examples of the import statements or build configuration in the SnappyData documentation, I assume because those kinds of details were too obvious to mention. Except to me. Is anyone here willing to help de-n00batize me on this matter?