I have been using snappy-sql, where I created some tables and ran some inserts and queries... everything worked fine.
Then, since I need to import a lot of data from CSV files, I created a Scala script that reads each of the files, extracts the data, and tries to insert it into the database (the core of the script is sketched below, after the shell command).
For this I am using the Spark that comes with SnappyData; I connect using:
./bin/spark-shell --conf spark.snappydata.store.sys-disk-dir=snappydatadir --conf spark.snappydata.store.log-file=snappydatadir/quickstart.log
The directory exists and everything "runs" "OK"... (not quite true, as it turns out).
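To give an idea, the core of the script (which I run inside that spark-shell) looks roughly like this; the file path and the table name "my_table" are placeholders for the real ones:

import org.apache.spark.sql.{SaveMode, SnappySession}

// sc is the SparkContext that spark-shell already provides;
// SnappySession is SnappyData's entry point for SQL on its tables.
val snappy = new SnappySession(sc)

// Read one of the CSV files (my real files need the header and
// schema-inference options, yours may not).
val df = snappy.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/path/to/one-of-the-files.csv")

// Append the rows into the table I created earlier in snappy-sql.
df.write.mode(SaveMode.Append).insertInto("my_table")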
Here is the problem: when I try to run queries against the tables I created in snappy-sql, the spark-shell tells me that the tables do not exist, and the same thing happens when the script reaches the insert command.
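With the same SnappySession as in the script above, for example:

// Any query against a table I created in snappy-sql fails:
snappy.sql("SELECT COUNT(*) FROM my_table").show()
// => org.apache.spark.sql.AnalysisException: Table or view not found
// (I am paraphrasing the error text from memory.)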
So my question, as I am a newbie:
How do I connect from that spark-shell (snappydir/bin/spark-shell...) and use the tables that already exist in SnappyData?
I bet I am not adding some specific configuration...
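For example, maybe I need to point the shell at the running cluster's locator rather than only setting the store directory and log file. Something like this is my guess (based on what I have read about the smart connector; 1527 is, as far as I know, the default client port, but I am not sure this is the right property):

./bin/spark-shell --conf spark.snappydata.connection=localhost:1527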
Thanks for the help... as I said, I am less than basic in SnappyData and Spark, so I am feeling a little lost trying to configure and set up my environment...