
How do I handle this cast?

scala.math.BigDecimal cannot be cast to java.math.BigDecimal

The error occurs while reading data from Redshift and writing it to S4HANA via JDBC.

JDBC connection options:

connection_sap_options_pfd = {
    "url": db_url,
    "dbTable": 'dbTable_name',
    "user": db_username,
    "password":db_password,
    "customJdbcDriverS3Path": driver_path,
    "className": jdbc_driver_name
}
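
The connection variables are defined along these lines (all values below are placeholders, not the real ones):

db_url = "jdbc:sap://<host>:<port>/"         # placeholder S4HANA JDBC URL
db_username = "<user>"                       # placeholder
db_password = "<password>"                   # placeholder
driver_path = "s3://<bucket>/ngdbc.jar"      # placeholder S3 path to the driver jar
jdbc_driver_name = "com.sap.db.jdbc.Driver"  # SAP HANA JDBC driver class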

Writing the DynamicFrame:

glueContext.write_dynamic_frame.from_options(
    frame=pfd_dynamicFrame,
    connection_type="custom.jdbc",
    connection_options=connection_sap_options_pfd,
    transformation_ctx="datasink_pfd",
)

pfd_dynamicFrame has one column of type Decimal(4, 0); while writing, I get the exception below.

py4j.protocol.Py4JJavaError: An error occurred while calling o109.pyWriteDynamicFrame.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 9.0 failed 4 times, most recent failure: Lost task 1.3 in stage 9.0 (TID 51) (44.109.37.157 executor 4): 
java.lang.ClassCastException: scala.math.BigDecimal cannot be cast to java.math.BigDecimal
    at org.apache.spark.sql.Row.getDecimal(Row.scala:285)
    at org.apache.spark.sql.Row.getDecimal$(Row.scala:285)
    at org.apache.spark.sql.catalyst.expressions.GenericRow.getDecimal(rows.scala:166)
    at com.amazonaws.services.glue.marketplace.partner.PartnerJDBCRecordWriter.$anonfun$makeSetter$12(PartnerJDBCDataSink.scala:276)
    at com.amazonaws.services.glue.marketplace.partner.PartnerJDBCRecordWriter.$anonfun$makeSetter$12$adapted(PartnerJDBCDataSink.scala:275)
    at com.amazonaws.services.glue.marketplace.partner.PartnerJDBCRecordWriter.writePartition(PartnerJDBCDataSink.scala:163)
    at com.amazonaws.services.glue.marketplace.connector.GlueCustomDataSink.$anonfun$defaultWriteDynamicFrame$1(CustomDataSink.scala:82)
    at com.amazonaws.services.glue.marketplace.connector.GlueCustomDataSink.$anonfun$defaultWriteDynamicFrame$1$adapted(CustomDataSink.scala:71)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:138)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1516)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
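
Judging by the trace, the setter in PartnerJDBCDataSink expects a java.math.BigDecimal, but Row.getDecimal receives a scala.math.BigDecimal instead. As a workaround I'm considering casting the Decimal(4, 0) column to a plain integer before writing, so the connector never hits the decimal setter. A minimal sketch, assuming the column is named amount (a placeholder for the real column name):

from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import col

# Convert to a Spark DataFrame, cast the Decimal(4, 0) column to int
# ("amount" is a placeholder for the real column name), and convert
# back to a DynamicFrame before writing.
pfd_df = pfd_dynamicFrame.toDF()
pfd_df = pfd_df.withColumn("amount", col("amount").cast("int"))
pfd_dynamicFrame = DynamicFrame.fromDF(pfd_df, glueContext, "pfd_casted")

Is there a way to make the custom JDBC connector write Decimal columns directly, without casting them away first?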