I was using the JDBC sink connector from Kafka Connect. It creates the table fine with one primary key, but when I try to add two `pk.fields`, it gives me this error:

java.lang.NullPointerException
        at io.confluent.connect.jdbc.util.TableDefinitions.refresh(TableDefinitions.java:86)
        at io.confluent.connect.jdbc.sink.DbStructure.createOrAmendIfNecessary(DbStructure.java:65)
        at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:85)
        at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
        at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

It worked with a single primary key.

  • Can you share your Kafka Connect config please – Robin Moffatt Apr 30 '19 at 20:01
  • "connector.class": "io.confluent.x.JdbcSinkxx","connection.url": "jdbc:sqlserver:/xx:1433;databaseName=xxxx","tasks.max":"1","topics":"x","connection.user": "x","connection.password": "xxx","dialect.name": "SqlxDialect", "auto.create" : "true","auto.evolve":"true","transforms":"xx", "transforms.xx.type":"org.apache.kafka.connect.transforms.TimestampConverter$Value","transforms.xx.target.type":"Timestamp","transforms.xx.field": "x", "transforms.xx.format": "yyyy-xxxS","pk.mode" :"record_value","insert.mode" : "upsert","pk.fields" : "XY,CV,VF", "batch.size": "3000" – Kumar Pinumalla Apr 30 '19 at 22:34
  • @KumarPinumalla, What is your jdbc-connector version? – Bartosz Wardziński May 01 '19 at 09:13
  • it says kafka-connect-jdbc-5.2.1.jar – Kumar Pinumalla May 01 '19 at 13:55
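For reference, a composite primary key in the JDBC sink is configured by listing the key fields comma-separated in `pk.fields` with `pk.mode=record_value`, which matches the settings quoted in the comment above. A minimal sketch of such a sink config (the field names `XY,CV,VF` and topic `x` are taken from that comment; the connector name and the host/database placeholders are illustrative, not from the original post):

```json
{
  "name": "jdbc-sink-composite-pk",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:sqlserver://<host>:1433;databaseName=<db>",
    "tasks.max": "1",
    "topics": "x",
    "insert.mode": "upsert",
    "auto.create": "true",
    "auto.evolve": "true",
    "pk.mode": "record_value",
    "pk.fields": "XY,CV,VF"
  }
}
```

With `pk.mode=record_value`, every field listed in `pk.fields` must exist in the record's value schema; a mismatch there is one thing worth ruling out before suspecting the connector itself.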

1 Answer

My Kafka Connect worker configuration:

    bootstrap.servers=localhost:9092
    group.id=connect-cluster
    key.converter=io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url=http://localhost:8081
    value.converter=io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url=http://localhost:8081
    avro.compatibility.level=none
    auto.register.schemas=true
    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-statuses
    config.storage.replication.factor=1
    offset.storage.replication.factor=1
    status.storage.replication.factor=1
    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable=false
    internal.value.converter.schemas.enable=false
    rest.host.name=kafka01.xxxxxxxxx.com
    rest.port=8083
    plugin.path=xxx/kafka/confluent-5.2.1/share/java,xxxx/kafka/confluent-5.2.1/share/java
    producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
    consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor