I have a Spark notebook that I run from a pipeline. The notebook runs fine when executed manually, but when run from the pipeline it fails with a file-location error. In the code I load the file into a DataFrame. The path in the code is abfss://storage_name/folder_name/*, but in the pipeline it is resolving to abfss://storage_name/filename.parquet
This is the error:

{
  "errorCode": "6002",
  "message": "org.apache.spark.sql.AnalysisException: Path does not exist: abfss://storage_name/filename.parquet
    at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4(DataSource.scala:806)
    at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4$adapted(DataSource.scala:803)
    at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:372)
    at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
    at scala.util.Success.$anonfun$map$1(Try.scala:255)
    at scala.util.Success.map(Try.scala:213)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
    at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)",
  "failureType": "UserError",
  "target": "notebook_name",
  "details": []
}
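For reference, this is roughly what the load looks like, plus a hypothetical helper that would normalize whatever path the pipeline passes (a single .parquet file or the folder itself) back into the folder-level glob the notebook expects. All names here (input_path, to_folder_glob) are placeholders, not my actual code; this is only a sketch of the behavior I want.

```python
# Hypothetical sketch: force a folder-level glob no matter which path
# variant the pipeline parameter supplies. Names are placeholders.
import posixpath


def to_folder_glob(path: str) -> str:
    """Return a folder glob for an abfss path.

    If the path points at a single file (e.g. .../filename.parquet),
    replace the file name with '*'; if it already ends in '*', keep it;
    if it is a bare folder, append '/*'.
    """
    if path.endswith("*"):
        return path
    head, tail = posixpath.split(path.rstrip("/"))
    if "." in tail:  # tail looks like a file name with an extension
        return posixpath.join(head, "*")
    return posixpath.join(path.rstrip("/"), "*")


# In the pipeline this parameter may arrive as a file path instead of
# the folder glob the notebook defaults to:
input_path = "abfss://storage_name/filename.parquet"

# df = spark.read.parquet(to_folder_glob(input_path))  # needs a SparkSession
print(to_folder_glob(input_path))  # the normalized folder glob
```

The spark.read.parquet line is commented out because it needs a live SparkSession; the point is only that the notebook would always read the folder glob regardless of the parameter it receives.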