
I have implemented dead letter queue (DLQ) error handling in Kafka Connect. It works and failed records are sent to the DLQ topics, but I don't understand what kind of data is being routed to them. This is the data in my DLQ topics:

And this is the normal data that was sunk:

The first picture shows the data that was routed to the DLQ topics, and the second shows the normal data that was sunk into the database. Does anyone have any idea how the key got changed, given that I used id as the key?

Here are my source and sink connector properties:

    "name": "jdbc_source_postgresql_analytics",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://192.168.5.40:5432/abc",
        "connection.user": "abc",
        "connection.password": "********",
        "topic.prefix": "test_",
        "mode": "timestamp+incrementing",
        "incrementing.column.name": "id",
        "timestamp.column.name": "updatedAt",
        "validate.non.null": true,
        "table.whitelist": "test",
        "key.converter": "org.apache.kafka.connect.converters.IntegerConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter.schemas.enable": false,
        "value.converter.schemas.enable": false,
        "catalog.pattern": "public",
        "transforms": "createKey,extractInt",
        "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
        "transforms.createKey.fields": "id",
        "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
        "transforms.extractInt.field": "id",
        "errors.tolerance": "all"

    }
}

Sink properties:
{
    "name": "es_sink_analytics",
    "config": {
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "type.name": "_doc",
        "key.converter.schemas.enable": "false",
        "topics": "TEST",
        "topic.index.map": "TEST:te_test",
        "value.converter.schemas.enable": "false",
        "connection.url": "http://192.168.10.40:9200",
        "connection.username": "******",
        "connection.password": "********",
        "key.ignore": "false",
        "errors.tolerance": "all",
        "errors.deadletterqueue.topic.name": "dlq-error-es",
        "errors.deadletterqueue.topic.replication.factor": "1",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter": "org.apache.kafka.connect.converters.IntegerConverter",
        "schema.ignore": "true",
        "error.tolerance":"all"
    }
}
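
For reference, one way to see why records end up in the DLQ is to turn on the error-context headers on the sink. Below is a minimal sketch of just the error-handling part of the sink config, with everything else left as above; errors.deadletterqueue.context.headers.enable is a standard Kafka Connect property that adds headers (failing stage, exception, original topic) to each DLQ record:

{
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "dlq-error-es",
    "errors.deadletterqueue.topic.replication.factor": "1",
    "errors.deadletterqueue.context.headers.enable": "true"
}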
  • You haven't added any code to show how you implemented the DLQ. The fact that you have keys set means you set them explicitly in your code, so you have probably forgotten to configure the key.serializer property for the DLQ. Please add your producer configs for both the normal flow and the DLQ. – yuranos Dec 31 '20 at 17:33
  • @yuranos I have edited and provided the source and sink configuration – Sajita Jan 01 '21 at 03:24
  • Can you try to read from the DLQ topic with a regular console command, like kafka-console-consumer --bootstrap-server your_bootstrap_server:9092 --max-messages 10 --property print.key=true --property key.deserializer=org.apache.kafka.common.serialization.LongDeserializer --from-beginning --topic your_topic – yuranos Jan 03 '21 at 18:03
  • @yuranos Yes, I tried it and got another error: [2021-01-15 11:46:09,842] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$) org.apache.kafka.common.errors.SerializationException: Size of data received by LongDeserializer is not 8 – Sajita Jan 15 '21 at 06:02
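
A note on the error above: the configured key.converter is IntegerConverter, which writes 4-byte keys, so a LongDeserializer (which expects 8 bytes) fails with exactly that size mismatch. Below is a sketch of the same console command with the matching integer deserializer, assuming the DLQ topic name from the sink config (the bootstrap server is a placeholder); if this also fails, the key bytes in the DLQ are probably not a plain integer at all:

kafka-console-consumer --bootstrap-server your_bootstrap_server:9092 \
  --topic dlq-error-es \
  --from-beginning \
  --max-messages 10 \
  --property print.key=true \
  --property key.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer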

0 Answers