We are facing an issue with our new connector: we want to capture change events for a MySQL table, but the connector fails with the following trace:
org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
	at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
	at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:369)
	at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.lambda$execute$25(MySqlStreamingChangeEventSource.java:860)
	at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1125)
	at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:973)
	at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:599)
	at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:857)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: io.debezium.DebeziumException: Error processing binlog event
	... 7 more
Caused by: io.debezium.DebeziumException: Encountered change event for table tablename whose schema isn't known to this connector
	at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.informAboutUnknownTableIfRequired(MySqlStreamingChangeEventSource.java:654)
	at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleUpdateTableMetadata(MySqlStreamingChangeEventSource.java:633)
	at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.lambda$execute$13(MySqlStreamingChangeEventSource.java:831)
	at io.debezium.connector.mysql.MySqlStreamingChangeEventSource.handleEvent(MySqlStreamingChangeEventSource.java:349
We suspect the issue started after downgrading from MySQL 8.0 to 5.7. We have tried deleting the database history topic and changing the snapshot mode configuration, but neither resolved it.
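For context, this is roughly how we reset the connector state; a sketch only, where the Connect host, broker address, and client.properties file are placeholders for our MSK/SASL setup (the topic name is the one from database.history.kafka.topic in our config):

```shell
# Delete the running connector so it stops consuming the binlog
curl -X DELETE http://connect-host:8083/connectors/speed-account-table-v3

# Delete the schema history topic
kafka-topics.sh --bootstrap-server broker:9096 \
  --command-config client.properties \
  --delete --topic speed-history.speed-account-table-new

# Re-create the connector from the JSON config below
curl -X POST -H "Content-Type: application/json" \
  --data @connector.json http://connect-host:8083/connectors
```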
The connector properties are as follows:
{
  "name": "speed-account-table-v3",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "snapshot.locking.mode": "none",
    "topic.creation.default.partitions": "1",
    "tasks.max": "1",
    "database.history.consumer.sasl.jaas.config": "jass config",
    "database.history.kafka.topic": "speed-history.speed-account-table-new",
    "bootstrap.servers": "cluster name",
    "database.history.consumer.security.protocol": "SASL_SSL",
    "tombstones.on.delete": "true",
    "snapshot.new.tables": "parallel",
    "topic.creation.default.replication.factor": "2",
    "database.history.skip.unparseable.ddl": "true",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "database.allowPublicKeyRetrieval": "true",
    "database.history.producer.sasl.mechanism": "SCRAM-SHA-512",
    "database.user": "username",
    "database.server.id": "server id",
    "database.history.producer.security.protocol": "SASL_SSL",
    "database.history.kafka.bootstrap.servers": "cluster name",
    "database.server.name": "speed-account-v3",
    "database.port": "portnumber",
    "key.converter.schemas.enable": "false",
    "value.converter.schema.registry.url": "xxxx",
    "database.hostname": "xxxxxx",
    "database.password": "xxxxx",
    "value.converter.schemas.enable": "false",
    "name": "speed-account-table-v3",
    "table.include.list": "speed.tbl_account",
    "database.history.consumer.sasl.mechanism": "SCRAM-SHA-512",
    "snapshot.mode": "initial",
    "database.include.list": "speed"
  }
}
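For reproducibility, this is roughly how we sanity-check the config before POSTing it to the Connect REST API; a minimal Python sketch where the config dict is abbreviated to the keys of interest and the endpoint URL is a placeholder:

```python
import json

# Abbreviated copy of the connector config above (keys of interest only).
config = {
    "name": "speed-account-table-v3",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.history.kafka.topic": "speed-history.speed-account-table-new",
        "database.server.name": "speed-account-v3",
        "table.include.list": "speed.tbl_account",
        "database.include.list": "speed",
        "snapshot.mode": "initial",
    },
}

# Keys the MySQL connector cannot start without.
required = ["connector.class", "database.server.name", "table.include.list"]
missing = [k for k in required if k not in config["config"]]
assert not missing, f"missing keys: {missing}"

# Serialize as the request body for POST /connectors.
payload = json.dumps(config)

# Submission (commented out; needs a live Connect worker):
# import urllib.request
# req = urllib.request.Request(
#     "http://connect-host:8083/connectors",
#     data=payload.encode(),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
print("config OK")
```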
We have also tried changing our infrastructure, using a different Kafka Connect cluster as well as a completely different MSK cluster, but the error persists.