
I feel like I must be missing something obvious here, but I can't seem to dynamically set a variable value in Spark SQL.

Let's say I have two tables, tableSrc and tableBuilder, and I'm creating tableDest.

I've been trying variants on

SET myVar FLOAT = NULL

SELECT
    myVar = avg(myCol)
FROM tableSrc;

CREATE TABLE tableDest(
    refKey INT,
    derivedValue FLOAT
);


INSERT INTO tableDest
    SELECT
        refKey,
        neededValue * myVar AS `derivedValue`
    FROM tableBuilder

Doing this in T-SQL is trivial, in a surprising win for Microsoft (DECLARE...SELECT). Spark, however, throws

Error in SQL statement: ParseException: mismatched input 'SELECT' expecting <EOF>(line 53, pos 0)

and I can't seem to assign a derived value to a variable for reuse. I've tried a few variants, but the closest I've gotten is assigning a variable to the string of a select statement.

[Databricks screenshot]

Please note that this is being adapted from a fully functional T-SQL script, so I'd rather not split out the dozen or so SQL variables and compute them with Python Spark queries just to insert {var1}, {var2}, etc. into a multi-hundred-line f-string. I know how to do this, but it would be messy, harder to read, slower to migrate, and worse to maintain, and I'd like to avoid it if at all possible.
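For reference, this is roughly the approach I mean, as a minimal sketch (assuming an active SparkSession named spark and the tables above):

%python
# compute the value in Python, then splice it into the SQL text with an f-string
my_var = spark.sql("SELECT avg(myCol) AS v FROM tableSrc").first()["v"]

spark.sql(f"""
    INSERT INTO tableDest
    SELECT
        refKey,
        neededValue * {my_var} AS derivedValue
    FROM tableBuilder
""")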

ZygD
Philip Kahn
  • how could you calculate avg of a column without group by? – Mahesh Gupta Dec 11 '19 at 08:45
  • by just doing the total average? It's an example, also, just to test if it's working (the real query is operating on a temp table that did all my filtering already). Also using operations other than average, I just chose the simplest case for the question. – Philip Kahn Dec 11 '19 at 17:20

5 Answers


The SET command you are using is for spark.conf get/set; it does not declare a variable you can use in SQL queries.
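To illustrate, SET in a SQL cell only reads and writes configuration entries (spark.sql.shuffle.partitions below is just an example key):

%sql
-- SET only touches Spark configuration; this writes a property ...
SET spark.sql.shuffle.partitions = 20;
-- ... and this reads it back
SET spark.sql.shuffle.partitions;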

For SQL queries you should use widgets:

https://docs.databricks.com/notebooks/widgets.html

But there is a way of using spark.conf parameters in SQL:

%python
spark.conf.set('personal.foo', 'bar')

Then you can use:

%sql
select * from table where column = '${personal.foo}';

The tricky part is that you have to use a dot (or some other special character) in the name of the spark.conf key; otherwise SQL cells will expect you to provide a value for the $variable at run time (it looks like a bug to me; I believe wrapping it in {} should be enough).
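Applied to the question's example, the same trick would look roughly like this (a sketch, assuming tableSrc, tableBuilder and tableDest from the question already exist):

%python
# compute the value once in Python and stash it in a spark.conf entry
avg_val = spark.sql("SELECT avg(myCol) AS v FROM tableSrc").first()["v"]
spark.conf.set('personal.myVar', str(avg_val))

%sql
INSERT INTO tableDest
SELECT
    refKey,
    neededValue * ${personal.myVar} AS derivedValue
FROM tableBuilder;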

Ronieri Marques

Databricks has just released SQL user-defined functions, which can handle a similar problem with no performance penalty. For your example it would look like:

CREATE TEMP FUNCTION myVar()
RETURNS FLOAT
LANGUAGE SQL
RETURN 
SELECT
    avg(myCol)
FROM tableSrc;

And then to use it:

SELECT
      refKey,
      neededValue * myVar() AS `derivedValue`
FROM tableBuilder
matkurek
  • This probably is the best current answer and a good thing to know. Thanks for the update! I hadn't heard about this. – Philip Kahn Dec 13 '21 at 23:05
  • please help on this df = sqlContext.sql("SELECT * FROM $SourceTableName where 1=2") where $SourceTableName is Parameter – user3843858 Jun 16 '22 at 15:23
  • @user3843858 Assign value of your parameter to a python variable SourceTableName and then do: df = sqlContext.sql(f"SELECT * FROM {SourceTableName} where 1=2") – matkurek Jun 17 '22 at 06:47
  • except... it appears that the temp function can't be used to fake setting an external variable that you later use as the parameter of another function. – user1390375 Oct 05 '22 at 07:16
  • this also means that the function will run the query every time it's called. That might be costly if the aggregate function is running on a huge dataset. Setting a variable would be best. I hope they find a solution soon – tendaitakas Oct 05 '22 at 15:00

I've circled around this issue for a long time. Finally, I've found a workaround using @Ronieri Marques's solution plus some PySpark functions. I'll try to provide fully working code below:

First, I create a sample table:

%sql
create table if not exists calendar
as 
select '2021-01-01' as date
union
select '2021-01-02' as date
union
select '2021-01-03' as date

%sql 
-- just to show the max and min dates
select max(date), min(date) from calendar

Combining sqlContext + toJSON, it is possible to dynamically assign a value to a variable; in this case I use a query:

%python
result = sqlContext.sql("select max(date), min(date) from calendar").toJSON()
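# result.first() is the JSON string {"max(date)":"2021-01-03","min(date)":"2021-01-01"},
# so the slices below pick out just the two date values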
spark.conf.set('date.end'    , result.first()[14:24])
spark.conf.set('date.start'  , result.first()[39:49])

Finally it will be possible to use the variables inside a SQL query:

%sql 
select * from calendar where date > '${date.start}' and date < '${date.end}'

Note that the substrings result.first()[14:24] and result.first()[39:49] are necessary because the value of result.first() is {"max(date)":"2021-01-03","min(date)":"2021-01-01"}, so we need to "tailor" the final result, picking up only the values we need.

The code can probably be polished, but right now it is the only working solution I've managed to implement.
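One way it could be polished, for example, is to read the aggregates by column alias instead of slicing the JSON string (a sketch against the same calendar table):

%python
row = spark.sql("select min(date) as start_date, max(date) as end_date from calendar").first()
spark.conf.set('date.start', row['start_date'])
spark.conf.set('date.end', row['end_date'])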

I hope this solution could be useful for someone.

Nicola

Databricks now has widgets for SQL as well: https://docs.databricks.com/notebooks/widgets.html#widgets-in-sql

CREATE WIDGET TEXT p_file_date DEFAULT "2021-03-21";
Select * from results where results.file_date = getArgument("p_file_date")
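For what it's worth, the same widget value can also be read from a Python cell (a small sketch; dbutils is available in Databricks notebooks):

%python
file_date = dbutils.widgets.get("p_file_date")
print(file_date)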
Vibha

You are missing a semi-colon at the end of the variable assignment.

SET myVar FLOAT = NULL;
...

Hope it helps :)

Pol Ortiz
  • Thanks for the comment! I ended up doing it the hard way with a table of variables I populated in Python, and don't have the time to review this project at the moment; when I do, if I can confirm your solution works, I'll accept this as the answer. (I'll feel really silly if that's all it took...) – Philip Kahn Apr 15 '20 at 18:37