I am using SnappyData to run some queries, and one of them uses a SQL WITH statement (a common table expression):
WITH x AS (
SELECT DISTINCT col_a, col_b
FROM table_a
)
INSERT INTO table_b
SELECT x.col_a, x.col_b
FROM x
JOIN table_c c ON x.col_a = c.col_a AND x.col_b = c.col_b
This SQL runs fine in local mode, but when I submit the compiled jar to the SnappyData cluster, it throws an error saying the table "APP.X" does not exist:
org.apache.spark.sql.TableNotFoundException: Table 'APP.X' not found;
Any idea why this happens?
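
For context, this is roughly how the query is issued from the job. This is a simplified sketch: the object name, app name, and session setup are assumptions about a typical SnappyData Scala job, not my exact code.

import org.apache.spark.sql.{SnappySession, SparkSession}

object CteInsertJob {
  def main(args: Array[String]): Unit = {
    // Typical SnappyData setup: build a SparkSession and wrap its
    // SparkContext in a SnappySession.
    val spark = SparkSession.builder()
      .appName("CteInsertJob") // hypothetical app name
      .getOrCreate()
    val snappy = new SnappySession(spark.sparkContext)

    // The WITH clause defines the CTE "x"; it is not a persisted table,
    // yet on the cluster it gets resolved as APP.X and fails.
    snappy.sql(
      """WITH x AS (
        |  SELECT DISTINCT col_a, col_b
        |  FROM table_a
        |)
        |INSERT INTO table_b
        |SELECT x.col_a, x.col_b
        |FROM x
        |JOIN table_c c ON x.col_a = c.col_a AND x.col_b = c.col_b
        |""".stripMargin)
  }
}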