I have a Glue ETL job that writes data to an on-premises PostgreSQL database. I'm unable to find an effective option among the Glue methods to read data back from the same database over the JDBC connection.
Below is the existing approach:
- Read data from S3 (CSV files) using a crawler; the data is visible in the Data Catalog.
- Created a Glue Data Catalog connection, "connection-on-premise-postgre", with the JDBC URL, user name, password, and other required configuration.
- A Glue ETL job loads the data from the catalog tables into the on-premises PostgreSQL tables.
#### context creation and other preceding setup ####
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database="default",
    table_name="my_data")
output_data = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=datasource0,
    catalog_connection="connection-on-premise-postgre",
    connection_options={"database": "my_db", "dbtable": "my_table"},
    redshift_tmp_dir=args["TempDir"],
    transformation_ctx="output_data")
I need to read data from another on-premises table into the AWS Glue job using the same connection, "connection-on-premise-postgre". Please let me know how to do this. I tried creating a DataFrame with connection options but couldn't configure the parameters properly.
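For reference, this is roughly the shape of the read I was attempting (a sketch only: the host, credentials, and `my_source_table` are placeholders, and the parameter names follow the `create_dynamic_frame.from_options` API rather than anything I have working):

```python
# Placeholder connection options for the read side (all values are
# stand-ins; in the real job these would come from the Glue connection
# "connection-on-premise-postgre" or from job parameters).
connection_options = {
    "url": "jdbc:postgresql://<host>:5432/my_db",
    "dbtable": "my_source_table",   # placeholder source table name
    "user": "<user>",
    "password": "<password>",
}

# Inside the Glue job (commented out here, since it needs a GlueContext):
# source_dyf = glueContext.create_dynamic_frame.from_options(
#     connection_type="postgresql",
#     connection_options=connection_options,
#     transformation_ctx="source_dyf")
```

Is `create_dynamic_frame.from_options` the right method here, or is there a way to reuse the catalog connection directly for reads?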