Questions tagged [databricks-connect]

172 questions
0 votes, 1 answer

How to list files in Azure Data Lake using Spark from PyCharm (local IDE) connected via databricks-connect

I am working on some code on my local machine in PyCharm. The execution is done on a Databricks cluster, while the data is stored in Azure Data Lake. Basically, I need to list the files in an Azure Data Lake directory and then apply some reading…
Amitoz • 30 • 7
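A minimal sketch of the usual Databricks Connect approach for this: the SparkSession created locally runs against the remote cluster, and DBUtils (shipped with databricks-connect) can list the directory; the storage account, container, and path below are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.dbutils import DBUtils  # provided by databricks-connect

    # Created locally, but jobs execute on the configured Databricks cluster.
    spark = SparkSession.builder.getOrCreate()
    dbutils = DBUtils(spark)

    # Hypothetical ADLS Gen2 path (container, account, and directory are placeholders).
    path = "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/raw/events"
    for f in dbutils.fs.ls(path):
        print(f.path, f.size)

    # The listed directory can then be read, e.g. as Parquet.
    df = spark.read.parquet(path)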
0 votes, 0 answers

How to integrate Eclipse IDE with Databricks Cluster

I am trying to integrate my Scala Eclipse IDE with my Azure Databricks cluster so that I can run my Spark program directly from Eclipse on the Databricks cluster. I followed the official Databricks documentation…
0 votes, 2 answers

How to generate a Databricks privileged token that is valid for more than 48 hours

I would like to run a local Jupyter notebook connecting to an Azure Databricks cluster, and I need to use dbutils to get secrets. This requires saving a privileged token locally, and it only lasts for 2 days. Is there any way to generate a token longer…
zzzk • 135 • 10
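For context, a sketch of how dbutils.secrets is typically called from a local session over Databricks Connect (scope and key names are hypothetical). The 48-hour cap applies to the privileged token that this call requires; its lifetime is a platform restriction rather than something set in code.

    from pyspark.sql import SparkSession
    from pyspark.dbutils import DBUtils  # provided by databricks-connect

    spark = SparkSession.builder.getOrCreate()
    dbutils = DBUtils(spark)

    # Needs a privileged token saved locally; scope and key names are placeholders.
    password = dbutils.secrets.get(scope="my-scope", key="sql-password")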
0 votes, 0 answers

Code errors out from IntelliJ but runs well on the Databricks Notebook

I develop Spark code using the Scala APIs in IntelliJ, and when I run it I get the error below, though it runs fine in a Databricks notebook. I am using Databricks Connect to connect from my local IntelliJ installation to the Databricks Spark cluster. I…
0 votes, 1 answer

Attribute error while creating a scope to access Azure Data Lake Gen2 from Databricks

I was trying to set this up using scopes and I am having a few issues. Any help would be appreciated. I ran the commands below in the Databricks CLI: databricks secrets create-scope --scope dnb-dlg2-dbrcks-scp-stg, then databricks secrets put --scope…
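Once the scope and its keys exist, a common follow-up pattern, sketched here under the assumption of a service-principal setup (all key names and the storage account are placeholders), is to read the credentials with dbutils.secrets and set the standard ADLS Gen2 OAuth options on the Spark configuration.

    from pyspark.sql import SparkSession
    from pyspark.dbutils import DBUtils

    spark = SparkSession.builder.getOrCreate()
    dbutils = DBUtils(spark)

    # Scope name from the question; the key names are hypothetical.
    scope = "dnb-dlg2-dbrcks-scp-stg"
    client_id = dbutils.secrets.get(scope=scope, key="sp-client-id")
    client_secret = dbutils.secrets.get(scope=scope, key="sp-client-secret")
    tenant_id = dbutils.secrets.get(scope=scope, key="sp-tenant-id")

    # Standard OAuth settings for ADLS Gen2 with a service principal
    # ("mystorageaccount" is a placeholder).
    acct = "mystorageaccount.dfs.core.windows.net"
    spark.conf.set(f"fs.azure.account.auth.type.{acct}", "OAuth")
    spark.conf.set(f"fs.azure.account.oauth.provider.type.{acct}",
                   "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
    spark.conf.set(f"fs.azure.account.oauth2.client.id.{acct}", client_id)
    spark.conf.set(f"fs.azure.account.oauth2.client.secret.{acct}", client_secret)
    spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{acct}",
                   f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")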
0 votes, 1 answer

Spark DataFrame to ADW table: unable to insert column with chars >4k

I have a Spark DataFrame, on Databricks, with a column whose values have character length > 10,000. I need to insert this into an Azure Data Warehouse (ADW) table, but I get an error if the column's character length is above 4,000. Error: Unexpected error…
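For reference, a sketch of the usual write path through the Databricks SQL DW (Azure Synapse) connector: the connector maps string columns to NVARCHAR of length maxStrLength (default 256), and NVARCHAR(n) tops out at 4,000, which is why longer values fail. The URL, table, temp dir, and DataFrame below are placeholders to illustrate the pattern, not a confirmed fix for values longer than 4,000 characters.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.table("source_table")  # stand-in for the DataFrame with long string values

    # Hypothetical JDBC URL, table, and temp dir; maxStrLength widens the NVARCHAR column.
    (df.write
       .format("com.databricks.spark.sqldw")
       .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydw")
       .option("forwardSparkAzureStorageCredentials", "true")
       .option("dbTable", "dbo.my_table")
       .option("tempDir", "wasbs://tempcontainer@mystorageaccount.blob.core.windows.net/tmp")
       .option("maxStrLength", "4000")
       .mode("append")
       .save())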
-2 votes, 1 answer

How to solve "Exception in thread "main" java.lang.Error: Unresolved compilation problems: SparkSession cannot be resolved to a type" in Java Spark

I installed VS Code, JDK 8, Python 3.8, and databricks-connect==8.1.* on an Azure Windows virtual machine. After that I created a Databricks cluster and configured Databricks Connect using cmd. After setting all the path variables I executed…