I need a little help finding a better solution for my use case below.
I have an S3 bucket containing input data, and it is encrypted with KMS KEY 1.
I am able to set KMS KEY 1 on my Spark session using "spark.hadoop.fs.s3.serverSideEncryption.kms.keyId" and read the data without any problem.
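Roughly what my read side looks like (the bucket path and key ARN are placeholders, and I'm assuming Parquet here just for illustration):

```python
from pyspark.sql import SparkSession

# Session configured for the input bucket encrypted with KMS KEY 1
spark = (
    SparkSession.builder
    .appName("read-with-key1")
    .config(
        "spark.hadoop.fs.s3.serverSideEncryption.kms.keyId",
        "arn:aws:kms:us-east-1:111111111111:key/KEY-1-ID",  # placeholder ARN
    )
    .getOrCreate()
)

# Placeholder path/format; the actual data format doesn't matter for the question
input_df = spark.read.parquet("s3://input-bucket/path/")
```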
Now I want to write the data to another S3 bucket, but that bucket is encrypted with KMS KEY 2.
What I am currently doing is: create the Spark session with KEY 1, read the data into a DataFrame, convert it to a Pandas DataFrame, kill the Spark session, recreate the session (within the same AWS Glue job) with KMS KEY 2, convert the Pandas DataFrame from the previous step back into a Spark DataFrame, and write it to the output S3 bucket.
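A rough sketch of that current workaround (key ARNs, bucket paths and format are placeholders; the Pandas round trip is where the datatype issues seem to come from):

```python
import pandas as pd
from pyspark.sql import SparkSession

KEY_PROP = "spark.hadoop.fs.s3.serverSideEncryption.kms.keyId"

# 1) Session with KEY 1: read the encrypted input
spark = (
    SparkSession.builder
    .config(KEY_PROP, "arn:aws:kms:...:key/KEY-1-ID")  # placeholder ARN
    .getOrCreate()
)
input_df = spark.read.parquet("s3://input-bucket/path/")  # placeholder
pandas_df = input_df.toPandas()  # hold the data outside Spark

# 2) Tear down and recreate the session with KEY 2
spark.stop()
spark = (
    SparkSession.builder
    .config(KEY_PROP, "arn:aws:kms:...:key/KEY-2-ID")  # placeholder ARN
    .getOrCreate()
)

# 3) Back to a Spark DataFrame and write to the output bucket
# Schema gets re-inferred from Pandas dtypes here, which I suspect is
# where the datatype drift happens
output_df = spark.createDataFrame(pandas_df)
output_df.write.parquet("s3://output-bucket/path/")  # placeholder
```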
However, this approach sometimes causes datatype issues. Is there a better alternative for handling this use case?
Thanks in advance; your help is greatly appreciated.