
In Synapse Analytics I can write the following SQL script and it works fine:

SELECT Table_name FROM dataverse_blob_blob.information_schema.tables WHERE Table_name NOT LIKE '%_partitioned' ORDER BY 1

I am trying to do the same using a Notebook:

%%sql
SELECT Table_name FROM `dataverse_blob_blob`.`information_schema`.`tables`

And it throws the error: Error: spark_catalog requires a single-part namespace, but got [dataverse_blob_blob, information_schema]

I tried using USE CATALOG and USE SCHEMA to set the catalog/schema, but that throws an error too.
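
Roughly, that attempt looked like the following (shown here as spark.sql() calls, equivalent to the %%sql cells; the database name is the same one used above, and this is only a reconstruction of the statements I ran):

# Sketch of the failed attempt from the Synapse notebook.
# The `spark` session object is predefined in a Synapse Spark notebook.
spark.sql("USE CATALOG dataverse_blob_blob")        # attempt to switch catalog -- errors
spark.sql("USE SCHEMA information_schema")          # attempt to switch schema  -- errors
spark.sql("SELECT table_name FROM tables").show()   # never reached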

  • I don't think it works that way. Lake Database tables are exposed through the Serverless SQL engine (Polaris) as External Tables, but the Spark engine has no access to Serverless SQL assets. – Joel Cochran Nov 03 '22 at 16:23

1 Answer


I have reproduced this in my environment and got the results below.

I created a serverless SQL database named mydb.


I tried to execute your command in a Synapse notebook and got the same error.


As commented by @Joel Cochran, Spark pools do not have access to serverless SQL databases, and the same applies to Lake databases, since those are queried through the serverless pools. That is why you get the above error.

If you want to work with those tables in notebooks, copy the data into ADLS or Blob storage and then mount that storage so you can read it in Spark, as sketched below.
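
For example, a minimal sketch of that approach, assuming the tables have already been copied as Parquet files into an ADLS Gen2 container (the storage account, container, linked service, and folder names below are placeholders, not values from your environment):

from notebookutils import mssparkutils

# Mount the container that holds the copied files (names are placeholders).
mssparkutils.fs.mount(
    "abfss://mycontainer@mystorageaccount.dfs.core.windows.net",
    "/dataverse_mount",
    {"linkedService": "MyADLSLinkedService"}
)

# Read the copied data through the synfs scheme and query it with Spark SQL.
job_id = mssparkutils.env.getJobId()
df = spark.read.parquet(f"synfs:/{job_id}/dataverse_mount/account")
df.createOrReplaceTempView("account")
spark.sql("SELECT * FROM account LIMIT 10").show()

Once the copied data is registered as temp views or Spark tables this way, you can list them from the notebook with SHOW TABLES.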

Rakesh Govindula