
I'm using the Kafka JDBC sink connector to push data from a Kafka topic to Postgres on AWS. However, when a bad message arrives (for example, a field whose data type differs from what is defined in the database), I get this error:

ERROR WorkerSinkTask{id=postgres_sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. (org.apache.kafka.connect.runtime.WorkerSinkTask:584)

Then I have to delete my JDBC sink connector (pod) manually and let a new one start. Since I am running in standalone mode, I need to prevent the connector (pod) from failing, or have it restart automatically, when an insert fails.


1 Answer


I think you can enable a dead letter queue and set errors.tolerance = all so that bad messages are pushed to another topic, which you can review later, while the task keeps running.
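For reference, here is a minimal sketch of what the sink's standalone properties file might look like with the DLQ settings added. The connector name, topics, connection URL, credentials, and DLQ topic name below are placeholders; the errors.* properties are the standard Kafka Connect error-handling settings (available since Kafka 2.0):

    # postgres-sink.properties (standalone worker) -- placeholder values
    name=postgres_sink
    connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
    topics=my-topic
    connection.url=jdbc:postgresql://my-host:5432/my-db
    connection.user=my-user
    connection.password=my-password

    # keep the task alive instead of killing it on a bad record
    errors.tolerance=all

    # route failing records to a dead letter queue topic for later review
    errors.deadletterqueue.topic.name=postgres-sink-dlq
    errors.deadletterqueue.topic.replication.factor=1
    errors.deadletterqueue.context.headers.enable=true

    # also log failures, including the message contents
    errors.log.enable=true
    errors.log.include.messages=true

One caveat: out of the box these settings cover failures in the converter and transform stages. A record that fails inside the connector itself (such as your failed insert) only reaches the DLQ if you are on Kafka 2.6+ and the connector version supports the errant record reporter, which recent JDBC sink releases do, as far as I know.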

There is more detail in this link.