
I'm submitting the Spark job through the Livy batch API as shown below. I pass the .p12 keystore in the "files" parameter so that the application can use it later for SSL communication.

{
   "className":"com.StreamingMain",
   "name":"StreamingMain.single",
   "conf":{
      "spark.yarn.submit.waitAppCompletion":"false",
      "spark.hadoop.fs.azure.enable.flush":"false",
      "spark.executorEnv.CLUSTER":"dev",
      "spark.executorEnv.NAME_SPACE":"dev",
      "spark.executorEnv.AZURE_ACCOUNT_NAME":"istoragedev",
      "spark.executorEnv.KAFKA_HOST":"",
      "spark.executorEnv.KAFKA_PORT":"",
      "spark.executorEnv.KAFKA_USER":"",
      "spark.executorEnv.KAFKA_PASSWD":"+++",
      "spark.executorEnv.HANA_DATA_LAKE_FILE_SYSTEM_URI":"",
      "spark.executorEnv.HANA_DATA_LAKE_PK12_LOCATION":"",
      "spark.executorEnv.HANA_DATA_LAKE_PASSWORD":"/vv8Mg==",
      "spark.sql.legacy.parquet.int96RebaseModeInRead":"LEGACY",
      "spark.sql.legacy.parquet.int96RebaseModeInWrite":"LEGACY",
      "spark.sql.legacy.parquet.datetimeRebaseModeInRead":"LEGACY",
      "spark.sql.legacy.parquet.datetimeRebaseModeInWrite":"LEGACY",
      "spark.sql.legacy.timeParserPolicy":"LEGACY"
   },
   "args":[
      "abfs://streaming/cs-dev.cs-dev.json"
   ],
   "driverMemory":"2g",
   "executorMemory":"12g",
   "driverCores":1,
   "executorCores":8,
   "numExecutors":1,
   "jars":[
      "abfs://dp/dp.jar"
   ],
   "file":"abfs://dp/dp.jar",
   "files":[
      "/app/pk12/client-keystore.p12"
   ]
}
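
For completeness, the payload above is POSTed to Livy's /batches endpoint along these lines (a sketch; livy-host:8998 and batch.json are placeholders for my actual Livy server and the JSON body shown above):

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import java.nio.file.{Files, Paths}

// Read the batch definition shown above and submit it to Livy.
val payload = new String(Files.readAllBytes(Paths.get("batch.json")))
val request = HttpRequest.newBuilder()
  .uri(URI.create("http://livy-host:8998/batches"))
  .header("Content-Type", "application/json")
  .POST(HttpRequest.BodyPublishers.ofString(payload))
  .build()

val response = HttpClient.newHttpClient()
  .send(request, HttpResponse.BodyHandlers.ofString())
println(s"Livy responded: ${response.statusCode()} ${response.body()}")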

My question is: will the client-keystore.p12 get copied to the Spark cluster? If yes, what is the file path of client-keystore.p12, i.e., to which location does it get copied and how can I find it? A sketch of how I intend to look it up follows below.
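
For context, this is roughly how I plan to resolve the keystore inside the job (a sketch; I'm assuming Livy's "files" behaves like spark-submit --files, and SparkFiles.get is Spark's helper for resolving such files by bare name):

import java.nio.file.{Files, Paths}
import org.apache.spark.SparkFiles

// Files distributed via --files are placed on the cluster's local disks;
// SparkFiles.get resolves the absolute local path from the bare file name.
val keystorePath = SparkFiles.get("client-keystore.p12")

// On YARN the file is typically also linked into the container's
// working directory under its original name, so fall back to that.
val resolved =
  if (Files.exists(Paths.get(keystorePath))) keystorePath
  else Paths.get("client-keystore.p12").toAbsolutePath.toString

println(s"client-keystore.p12 resolved to: $resolved")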

Any help would be appreciated.

