I am using Docker and I am new to Kafka Connect. My use case is as follows: I have a Postgres database from which I need to capture every change event (INSERT, UPDATE, DELETE) on a Kafka topic and process it further. But I am stuck at capturing the change events. I am following the below link:
I created the connector with the below configuration:
{"name": "postgres-source",
"config": {"connector.class":"io.debezium.connector.postgresql.PostgresConnector",
"tasks.max":"1",
"database.hostname": "postgres",
"database.port": "5432",
"database.user": "postgres",
"database.password": "postgres",
"database.dbname" : "students",
"database.server.name": "dbserver15",
"database.whitelist": "students",
"database.history.kafka.bootstrap.servers": "kafka:9092",
"database.history.kafka.topic": "schema-changes.students",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.storage.StringConverter",
"key.converter.schemas.enable": "false",
"value.converter.schemas.enable": "true",
"value.converter.schema.registry.url": "http://schema-registry:8081",
"transforms": "unwrap",
"transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope"
}
}
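For reference, this is roughly how I register the connector via the Kafka Connect REST API (a minimal sketch; the worker port 8083 and the file name postgres-source.json are assumptions from my setup):

# Save the JSON above as postgres-source.json, then POST it to the
# Connect REST API (8083 is the default Connect worker port)
curl -X POST -H "Content-Type: application/json" \
     --data @postgres-source.json \
     http://localhost:8083/connectors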
I am using the below command to consume the snapshot/change events from the topic:
kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic dbserver15.public.admission
It shows the data in this format: Struct{student_id=1,gre=337,toefl=118}...
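To trigger change events, I run simple DML against the table, for example (a sketch; the column names are inferred from the Struct output above, and the exact DDL and the container name postgres are assumptions):

# Insert a test row into the admission table
# (table/column names inferred from the Struct output above;
#  the container name and DDL are assumptions)
docker exec -it postgres psql -U postgres -d students \
  -c "INSERT INTO public.admission (student_id, gre, toefl) VALUES (2, 324, 110);"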
But as soon as I perform any INSERT, UPDATE, or DELETE on this table, the Kafka Connect worker throws the error below:
org.apache.kafka.connect.errors.ConnectException: An exception ocurred in the change event producer. This connector will be stopped.
    at io.debezium.connector.base.ChangeEventQueue.throwProducerFailureIfPresent(ChangeEventQueue.java:170)
    at io.debezium.connector.base.ChangeEventQueue.poll(ChangeEventQueue.java:151)
    at io.debezium.connector.postgresql.PostgresConnectorTask.poll(PostgresConnectorTask.java:156)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:244)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:220)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Invalid identifier:
    at io.debezium.relational.TableIdParser$TableIdTokenizer.tokenize(TableIdParser.java:68)
    at io.debezium.text.TokenStream.start(TokenStream.java:445)
    at io.debezium.relational.TableIdParser.parse(TableIdParser.java:28)
    at io.debezium.relational.TableId.parse(TableId.java:39)
    at io.debezium.connector.postgresql.PostgresSchema.parse(PostgresSchema.java:218)
    at io.debezium.connector.postgresql.RecordsStreamProducer.process(RecordsStreamProducer.java:238)
    at io.debezium.connector.postgresql.RecordsStreamProducer.lambda$streamChanges$1(RecordsStreamProducer.java:131)
    at io.debezium.connector.postgresql.connection.pgproto.PgProtoMessageDecoder.processMessage(PgProtoMessageDecoder.java:48)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.deserializeMessages(PostgresReplicationConnection.java:265)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.read(PostgresReplicationConnection.java:250)
    at io.debezium.connector.postgresql.RecordsStreamProducer.streamChanges(RecordsStreamProducer.java:131)
    at io.debezium.connector.postgresql.RecordsStreamProducer.lambda$start$0(RecordsStreamProducer.java:117)
    ... 5 more
Below are the solutions I have looked into:
https://gitter.im/debezium/user?at=5e1f2846be66165ecbd4e0fe
https://debezium.io/documentation/reference/connectors/postgresql.html#postgresql-when-things-go-wrong
which say that setting snapshot.mode to exported allows the connector to perform a lock-free snapshot.
But when I add "snapshot.mode": "exported", an error is thrown stating:
The 'snapshot.mode' value 'exported' is invalid: Value must be one of always, never, initial_only, initial, custom
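This makes me suspect that the Debezium version bundled in my Connect image is older than the one the documentation describes. This is how I check the installed plugin versions (assuming the Connect REST API is reachable on localhost:8083):

# List the installed connector plugins; the response includes each
# plugin's class, type, and version
curl -s http://localhost:8083/connector-plugins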
Can somebody elaborate a little and explain what I am missing? I guess it is something related to the configuration.