
I am working with R on (Azure) Databricks and want to enable Apache Arrow for I/O. However, the sample code below throws an error that I cannot trace back.

The error occurs on clusters running Databricks Runtime 7.0 ML (Spark 3.0.0) and 7.1 ML (Spark 3.0.0).

library(arrow)
library(dplyr)
library(SparkR)

arrow::arrow_available()
# TRUE

# initialize Spark session using Arrow
SparkR::sparkR.session(sparkConfig = list(spark.sql.execution.arrow.sparkr.enabled = "true"))

# create Spark DataFrame
df <- mtcars
spark_df <- cache(createDataFrame(df))

# write spark_df as parquet
sink_path <- "/dbfs/FileStore/testData"
file_path <- "dbfs:/FileStore/testData/arrow_testFile"
dir.create(sink_path, recursive = TRUE, showWarnings = FALSE)
SparkR::write.parquet(spark_df, file_path, mode = "overwrite")

# read parquet file as Spark DataFrame and cache
file_path %>%
    SparkR::read.parquet() %>%
    SparkR::cache() -> sdf_new

# collect sdf_new
sdf_new %>%
    SparkR::collect() -> rdf_new

The error message I am getting is the following:

 Error : 'as_tibble' is not an exported object from 'namespace:arrow' 

I know that some changes regarding `as_tibble` were made in recent arrow releases, but it is unclear to me how I can avoid this error and make the Arrow fly.
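For reference, the one workaround I am considering is pinning an older arrow release, since SparkR 3.0.0 still seems to call `arrow::as_tibble()`, which (as far as I can tell) was removed in arrow 1.0.0. A sketch of what I had in mind, assuming `remotes` can be installed on the cluster and that 0.17.1 is indeed the last release exporting `as_tibble()`:

```r
# Possible workaround (untested): downgrade arrow to a pre-1.0.0 release
# that still exports as_tibble(), which SparkR 3.0.0 calls on collect().
install.packages("remotes")
remotes::install_version("arrow", version = "0.17.1")  # assumed last version with as_tibble()

# Restart the R session afterwards, then re-run the snippet above.
```

Disabling Arrow via `spark.sql.execution.arrow.sparkr.enabled = "false"` also makes the error go away, but that defeats the purpose, so I would rather find a version combination that works.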
