I have several dataframes, with various columns and rows, pulled from Google Sheets worksheets (via pygsheets) and from Postgres tables in several databases, and I'm trying to write them to S3 buckets using awswrangler.
For most of them I don't need to specify column types explicitly, but some require it.
Sometimes, though, I get an error like this:
awswrangler.s3.to_parquet: Could not convert X with type Y: tried to convert to Z when saving table A
The problem is that I have no idea which column the value X comes from, so I can't type it properly and pass that type to awswrangler.s3.to_parquet() via the dtype parameter.
Is there a way to do this, or is it just not a feature of this library?
Something similar to what I'd want:
awswrangler.s3.to_parquet: Could not convert X from column B with type Y: tried to convert to Z when saving table A