I have been using record format conversion in Kinesis Data Firehose to convert files to Parquet format in S3, with the schema stored in AWS Glue. I am struggling with an issue where I am unable to configure the timestamp format that the data should have once it is written to the S3 bucket.
There is a timestamp field in the record (an epoch value in seconds) that is passed to Kinesis. During record format conversion I use the OpenX JSON SerDe because it supports epoch values in seconds. After this conversion, the field picked up from the schema has the type glue.schema.TIMESTAMP.
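For reference, this is roughly how my conversion is set up (a minimal sketch via boto3; the stream name, bucket, Glue database/table, and role ARN are placeholders for my actual resources):

```python
import boto3

firehose = boto3.client("firehose")

# Sketch of the delivery stream's format-conversion setup; all names and
# ARNs below are placeholders for my real resources.
firehose.create_delivery_stream(
    DeliveryStreamName="events-to-parquet",  # placeholder
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",  # placeholder
        "BucketARN": "arn:aws:s3:::my-parquet-bucket",              # placeholder
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            # OpenX JSON SerDe on the input side (accepts epoch seconds)
            "InputFormatConfiguration": {
                "Deserializer": {"OpenXJsonSerDe": {}}
            },
            # Parquet on the output side
            "OutputFormatConfiguration": {
                "Serializer": {"ParquetSerDe": {}}
            },
            # Schema is read from AWS Glue; the field in question has
            # type TIMESTAMP there.
            "SchemaConfiguration": {
                "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",  # placeholder
                "DatabaseName": "my_glue_db",   # placeholder
                "TableName": "my_glue_table",   # placeholder
                "Region": "us-east-1",
                "VersionId": "LATEST",
            },
        },
    },
)
```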
Now, when I use S3 Select to query the data in these Parquet files, the timestamp value comes back as a BigInt, e.g. 4.5372829208888797985120256e+25, whereas I need the timestamp in the format YYYY-MM-DD HH:mm:ss.
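This is roughly how I run the query (again a sketch; the bucket, key, and field name `ts` are placeholders for my actual objects):

```python
import boto3

s3 = boto3.client("s3")

# Query one of the Parquet files Firehose wrote; bucket, key, and the
# field name 'ts' are placeholders for my actual data.
resp = s3.select_object_content(
    Bucket="my-parquet-bucket",          # placeholder
    Key="output/part-0000.parquet",      # placeholder
    ExpressionType="SQL",
    Expression="SELECT s.ts FROM S3Object s",  # 'ts' is the timestamp field
    InputSerialization={"Parquet": {}},
    OutputSerialization={"JSON": {}},
)

for event in resp["Payload"]:
    if "Records" in event:
        # Prints something like {"ts": 4.5372829208888797985120256e+25}
        # rather than a "YYYY-MM-DD HH:mm:ss" value.
        print(event["Records"]["Payload"].decode("utf-8"))
```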