
I have a running Kafka Connect instance and have submitted my connector using the configuration shown at the bottom of this post.

Question

The Debezium docs seem to indicate that I should set `database.server.name=connect_test` and create a topic for each table I want to ingest into Kafka. So for my table, I'd create `connect_test-TEST_Test_Table_Object`.
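For comparison, Debezium's default topic naming for the SQL Server connector is dot-separated, `<database.server.name>.<schemaName>.<tableName>`. A small sketch makes this concrete; the `dbo` schema below is an assumption about where the table lives, not something stated in the post:

```python
# Sketch of Debezium's default change-event topic name for SQL Server:
# "<database.server.name>.<schemaName>.<tableName>", joined with dots.
# "dbo" is an assumed schema; substitute the table's real schema.
def debezium_topic(server_name: str, schema: str, table: str) -> str:
    """Build the default Debezium change-event topic name."""
    return f"{server_name}.{schema}.{table}"

print(debezium_topic("connect_test", "dbo", "TEST_Test_Table_Object"))
# → connect_test.dbo.TEST_Test_Table_Object
```

If the pre-created topic uses a dash instead of the dots above, the connector may be writing to (or trying to auto-create) a differently named topic than the one being watched.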

I don't get any errors, but no data is ingested into Kafka. I do see some warnings about configs, but I'm just trying to get a very basic test up and running.

Can anyone provide any insight?

I've also pre-created the following topics:

  1. connect-configs (1 partition)
  2. connect-offsets (3 partitions)
  3. connect-status (3 partitions)
  4. schema_changes-connect_test (3 partitions)
  5. connect_test-TEST_Test_Table_Object (3 partitions)
    {
        "name": "sql-server-source-connector",
        "config": {
            "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
            "database.hostname": "redacted.public.redacted.database.windows.net",
            "database.port": "3342",
            "database.user": "db_user",
            "database.password": "password",
            "database.dbname": "TEST_DB",
            "database.server.name": "connect_test",
            "database.history.kafka.bootstrap.servers": "kafka-url-1:9096,kafka-url-2:9096,kafka-url-3:9096",
            "database.history.kafka.topic": "schema_changes-connect_test",
            "table.include.list": "TEST_Test_Table_Object",
            "database.history.producer.security.protocol": "SSL",
            "database.history.producer.ssl.keystore.location": "/app/.keystore.jks",
            "database.history.producer.ssl.keystore.password": "password",
            "database.history.producer.ssl.truststore.location": "/app/.truststore.jks",
            "database.history.producer.ssl.truststore.password": "password",
            "database.history.producer.ssl.key.password": "password",
            "database.history.consumer.security.protocol": "SSL",
            "database.history.consumer.ssl.keystore.location": "/app/.keystore.jks",
            "database.history.consumer.ssl.keystore.password": "password",
            "database.history.consumer.ssl.truststore.password": "/app/.truststore.jks",
            "database.history.consumer.ssl.key.password": "password"
        }
    }
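One thing worth double-checking in the config above: `database.history.consumer.ssl.truststore.password` is set to a `.jks` path rather than a password, and no consumer truststore location is given (possibly just a transcription slip). A quick, unofficial sanity-check sketch for this class of mix-up:

```python
import json

# Unofficial sanity check: flag SSL "*.password" keys whose value looks like
# a keystore path. The snippet reproduces the suspect line from the config
# above; it is not a complete connector config.
config_json = """{
  "database.history.consumer.ssl.truststore.password": "/app/.truststore.jks",
  "database.history.consumer.ssl.key.password": "password"
}"""

def suspicious_password_keys(cfg: dict) -> list:
    """Return keys ending in '.password' whose value ends in '.jks'."""
    return [k for k, v in cfg.items()
            if k.endswith(".password") and str(v).endswith(".jks")]

print(suspicious_password_keys(json.loads(config_json)))
# → ['database.history.consumer.ssl.truststore.password']
```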

I keep seeing `Failed to construct kafka producer` ... `caused by: Failed to load SSL keystore /app/.keystore.jks of type JKS` ... `failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded`

I'm using Heroku Kafka and I have three certs: `client_cert.pem`, `client_key.pem`, `trusted_cert.pem`

I use `keytool` to convert my `.pem` files into `/app/.keystore.jks` and `/app/.truststore.jks`

Logs (some WARN lines redacted for size):

2022-06-06T21:11:16.115619+00:00 app[web.3]: [2022-06-06 21:11:16,115] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Cluster ID: some-id (org.apache.kafka.clients.Metadata)
2022-06-06T21:11:16.116945+00:00 app[web.3]: [2022-06-06 21:11:16,116] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Subscribed to partition(s): connect-status-0, connect-status-2, connect-status-1 (org.apache.kafka.clients.consumer.KafkaConsumer)
2022-06-06T21:11:16.117022+00:00 app[web.3]: [2022-06-06 21:11:16,116] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Seeking to EARLIEST offset of partition connect-status-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:16.117070+00:00 app[web.3]: [2022-06-06 21:11:16,117] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Seeking to EARLIEST offset of partition connect-status-2 (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:16.117092+00:00 app[web.3]: [2022-06-06 21:11:16,117] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Seeking to EARLIEST offset of partition connect-status-1 (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:15.127922+00:00 app[web.3]: [2022-06-06 21:11:15,124] INFO [Producer clientId=producer-1] Cluster ID: some-id (org.apache.kafka.clients.Metadata)
2022-06-06T21:11:16.441247+00:00 app[web.3]: [2022-06-06 21:11:16,439] INFO [Producer clientId=producer-3] Cluster ID: some-id (org.apache.kafka.clients.Metadata)
2022-06-06T21:11:16.579714+00:00 app[web.3]: [2022-06-06 21:11:16,577] WARN The configuration 'log4j.loggers' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)

# WARN redacted for size limits

2022-06-06T21:11:16.580291+00:00 app[web.3]: [2022-06-06 21:11:16,580] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
2022-06-06T21:11:16.580291+00:00 app[web.3]: [2022-06-06 21:11:16,580] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
2022-06-06T21:11:16.580315+00:00 app[web.3]: [2022-06-06 21:11:16,580] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
2022-06-06T21:11:16.580348+00:00 app[web.3]: [2022-06-06 21:11:16,580] WARN The configuration 'log4j.root.loglevel' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
2022-06-06T21:11:16.580414+00:00 app[web.3]: [2022-06-06 21:11:16,580] INFO Kafka version: 6.1.4-ccs (org.apache.kafka.common.utils.AppInfoParser)
2022-06-06T21:11:16.580457+00:00 app[web.3]: [2022-06-06 21:11:16,580] INFO Kafka commitId: c9124241a6ff43bc (org.apache.kafka.common.utils.AppInfoParser)
2022-06-06T21:11:16.580479+00:00 app[web.3]: [2022-06-06 21:11:16,580] INFO Kafka startTimeMs: 1654549876580 (org.apache.kafka.common.utils.AppInfoParser)
2022-06-06T21:11:16.607720+00:00 app[web.3]: [2022-06-06 21:11:16,607] INFO [Consumer clientId=consumer-connect-demo-group-3, groupId=connect-demo-group] Cluster ID: someId (org.apache.kafka.clients.Metadata)
2022-06-06T21:11:16.608322+00:00 app[web.3]: [2022-06-06 21:11:16,608] INFO [Consumer clientId=consumer-connect-demo-group-3, groupId=connect-demo-group] Subscribed to partition(s): connect-configs-0 (org.apache.kafka.clients.consumer.KafkaConsumer)
2022-06-06T21:11:16.608416+00:00 app[web.3]: [2022-06-06 21:11:16,608] INFO [Consumer clientId=consumer-connect-demo-group-3, groupId=connect-demo-group] Seeking to EARLIEST offset of partition connect-configs-0 (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:16.658870+00:00 app[web.3]: [2022-06-06 21:11:16,658] INFO [Consumer clientId=consumer-connect-demo-group-3, groupId=connect-demo-group] Resetting offset for partition connect-configs-0 to position FetchPosition{offset=20, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka-url-1:9096 (id: 1 rack: us-east-1a)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:16.164555+00:00 app[web.3]: [2022-06-06 21:11:16,163] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Resetting offset for partition connect-status-2 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka-url-2:9096 (id: 2 rack: us-east-1b)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:16.189017+00:00 app[web.3]: [2022-06-06 21:11:16,188] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Resetting offset for partition connect-status-1 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka-url-1:9096 (id: 1 rack: us-east-1a)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:16.238074+00:00 app[web.3]: [2022-06-06 21:11:16,237] INFO [Consumer clientId=consumer-connect-demo-group-2, groupId=connect-demo-group] Resetting offset for partition connect-status-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka-url-3:9096 (id: 0 rack: us-east-1c)], epoch=0}}. (org.apache.kafka.clients.consumer.internals.SubscriptionState)
2022-06-06T21:11:16.252558+00:00 app[web.3]: [2022-06-06 21:11:16,252] INFO ProducerConfig values:
2022-06-06T21:11:16.252560+00:00 app[web.3]: acks = -1
2022-06-06T21:11:16.252561+00:00 app[web.3]: batch.size = 16384
2022-06-06T21:11:16.252562+00:00 app[web.3]: bootstrap.servers = [kafka-url-2:9096, kafka-url-1:9096, kafka-url-3:9096]
2022-06-06T21:11:16.252562+00:00 app[web.3]: buffer.memory = 33554432
2022-06-06T21:11:16.252563+00:00 app[web.3]: client.dns.lookup = use_all_dns_ips
2022-06-06T21:11:16.252563+00:00 app[web.3]: client.id = producer-3
2022-06-06T21:11:16.252564+00:00 app[web.3]: compression.type = none
2022-06-06T21:11:16.252564+00:00 app[web.3]: connections.max.idle.ms = 540000
2022-06-06T21:11:16.252564+00:00 app[web.3]: delivery.timeout.ms = 2147483647
2022-06-06T21:11:16.252564+00:00 app[web.3]: enable.idempotence = false
2022-06-06T21:11:16.252565+00:00 app[web.3]: interceptor.classes = []
2022-06-06T21:11:16.252565+00:00 app[web.3]: internal.auto.downgrade.txn.commit = false
2022-06-06T21:11:16.252566+00:00 app[web.3]: key.serializer = class org.apache.kafka.common.serialization.StringSerializer
2022-06-06T21:11:16.252566+00:00 app[web.3]: linger.ms = 0
2022-06-06T21:11:16.252566+00:00 app[web.3]: max.block.ms = 60000
2022-06-06T21:11:16.252567+00:00 app[web.3]: max.in.flight.requests.per.connection = 1
2022-06-06T21:11:16.252567+00:00 app[web.3]: max.request.size = 1048576
2022-06-06T21:11:16.252567+00:00 app[web.3]: metadata.max.age.ms = 300000
2022-06-06T21:11:16.252567+00:00 app[web.3]: metadata.max.idle.ms = 300000
2022-06-06T21:11:16.252567+00:00 app[web.3]: metric.reporters = []
2022-06-06T21:11:16.252568+00:00 app[web.3]: metrics.num.samples = 2
2022-06-06T21:11:16.252568+00:00 app[web.3]: metrics.recording.level = INFO
2022-06-06T21:11:16.252568+00:00 app[web.3]: metrics.sample.window.ms = 30000
2022-06-06T21:11:16.252569+00:00 app[web.3]: partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
2022-06-06T21:11:16.252569+00:00 app[web.3]: receive.buffer.bytes = 32768
2022-06-06T21:11:16.252569+00:00 app[web.3]: reconnect.backoff.max.ms = 1000
2022-06-06T21:11:16.252570+00:00 app[web.3]: reconnect.backoff.ms = 50
2022-06-06T21:11:16.252570+00:00 app[web.3]: request.timeout.ms = 30000
2022-06-06T21:11:16.252570+00:00 app[web.3]: retries = 2147483647
2022-06-06T21:11:16.252570+00:00 app[web.3]: retry.backoff.ms = 100
2022-06-06T21:11:16.252571+00:00 app[web.3]: sasl.client.callback.handler.class = null
2022-06-06T21:11:16.252571+00:00 app[web.3]: sasl.jaas.config = null
2022-06-06T21:11:16.252571+00:00 app[web.3]: sasl.kerberos.kinit.cmd = /usr/bin/kinit
2022-06-06T21:11:16.252572+00:00 app[web.3]: sasl.kerberos.min.time.before.relogin = 60000
2022-06-06T21:11:16.252572+00:00 app[web.3]: sasl.kerberos.service.name = null
2022-06-06T21:11:16.252572+00:00 app[web.3]: sasl.kerberos.ticket.renew.jitter = 0.05
2022-06-06T21:11:16.252573+00:00 app[web.3]: sasl.kerberos.ticket.renew.window.factor = 0.8
2022-06-06T21:11:16.252573+00:00 app[web.3]: sasl.login.callback.handler.class = null
2022-06-06T21:11:16.252573+00:00 app[web.3]: sasl.login.class = null
2022-06-06T21:11:16.252574+00:00 app[web.3]: sasl.login.refresh.buffer.seconds = 300
2022-06-06T21:11:16.252574+00:00 app[web.3]: sasl.login.refresh.min.period.seconds = 60
2022-06-06T21:11:16.252574+00:00 app[web.3]: sasl.login.refresh.window.factor = 0.8
2022-06-06T21:11:16.252574+00:00 app[web.3]: sasl.login.refresh.window.jitter = 0.05
2022-06-06T21:11:16.252575+00:00 app[web.3]: sasl.mechanism = GSSAPI
2022-06-06T21:11:16.252575+00:00 app[web.3]: security.protocol = SSL
2022-06-06T21:11:16.252575+00:00 app[web.3]: security.providers = null
2022-06-06T21:11:16.252575+00:00 app[web.3]: send.buffer.bytes = 131072
2022-06-06T21:11:16.252576+00:00 app[web.3]: socket.connection.setup.timeout.max.ms = 127000
2022-06-06T21:11:16.252576+00:00 app[web.3]: socket.connection.setup.timeout.ms = 10000
2022-06-06T21:11:16.252576+00:00 app[web.3]: ssl.cipher.suites = null
2022-06-06T21:11:16.252577+00:00 app[web.3]: ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2022-06-06T21:11:16.252577+00:00 app[web.3]: ssl.endpoint.identification.algorithm =
2022-06-06T21:11:16.252577+00:00 app[web.3]: ssl.engine.factory.class = null
2022-06-06T21:11:16.252578+00:00 app[web.3]: ssl.key.password = [hidden]
2022-06-06T21:11:16.252578+00:00 app[web.3]: ssl.keymanager.algorithm = SunX509
2022-06-06T21:11:16.252578+00:00 app[web.3]: ssl.keystore.certificate.chain = null
2022-06-06T21:11:16.252578+00:00 app[web.3]: ssl.keystore.key = null
2022-06-06T21:11:16.252579+00:00 app[web.3]: ssl.keystore.location = /app/.keystore.jks
2022-06-06T21:11:16.252579+00:00 app[web.3]: ssl.keystore.password = [hidden]
2022-06-06T21:11:16.252579+00:00 app[web.3]: ssl.keystore.type = JKS
2022-06-06T21:11:16.252580+00:00 app[web.3]: ssl.protocol = SSL
2022-06-06T21:11:16.252580+00:00 app[web.3]: ssl.provider = null
2022-06-06T21:11:16.252580+00:00 app[web.3]: ssl.secure.random.implementation = null
2022-06-06T21:11:16.252580+00:00 app[web.3]: ssl.trustmanager.algorithm = PKIX
2022-06-06T21:11:16.252581+00:00 app[web.3]: ssl.truststore.certificates = null
2022-06-06T21:11:16.252581+00:00 app[web.3]: ssl.truststore.location = /app/.truststore.jks
2022-06-06T21:11:16.252581+00:00 app[web.3]: ssl.truststore.password = [hidden]
2022-06-06T21:11:16.252582+00:00 app[web.3]: ssl.truststore.type = JKS
2022-06-06T21:11:16.252582+00:00 app[web.3]: transaction.timeout.ms = 60000
2022-06-06T21:11:16.252582+00:00 app[web.3]: transactional.id = null
2022-06-06T21:11:16.252582+00:00 app[web.3]: value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2022-06-06T21:11:16.252583+00:00 app[web.3]: (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391195+00:00 app[web.3]: [2022-06-06 21:11:16,390] WARN The configuration 'log4j.loggers' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391271+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391272+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'rest.advertised.port' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391272+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391273+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'status.storage.partitions' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391273+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'metrics.context.connect.kafka.cluster.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391304+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'offset.storage.partitions' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391812+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'topic.creation.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391813+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391855+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'config.storage.partitions' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391894+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391894+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391918+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391963+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.391988+00:00 app[web.3]: [2022-06-06 21:11:16,391] WARN The configuration 'log4j.root.loglevel' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
2022-06-06T21:11:16.392047+00:00 app[web.3]: [2022-06-06 21:11:16,392] INFO Kafka version: 6.1.4-ccs (org.apache.kafka.common.utils.AppInfoParser)
2022-06-06T21:11:16.392077+00:00 app[web.3]: [2022-06-06 21:11:16,392] INFO Kafka commitId: c9124241a6ff43bc (org.apache.kafka.common.utils.AppInfoParser)
2022-06-06T21:11:16.392131+00:00 app[web.3]: [2022-06-06 21:11:16,392] INFO Kafka startTimeMs: 1654549876391 (org.apache.kafka.common.utils.AppInfoParser)
2022-06-06T21:11:16.401532+00:00 app[web.3]: [2022-06-06 21:11:16,401] INFO ConsumerConfig values:
2022-06-06T21:11:16.401533+00:00 app[web.3]: allow.auto.create.topics = true
2022-06-06T21:11:16.401534+00:00 app[web.3]: auto.commit.interval.ms = 5000
2022-06-06T21:11:16.401534+00:00 app[web.3]: auto.offset.reset = earliest
2022-06-06T21:11:16.401535+00:00 app[web.3]: bootstrap.servers = [kafka-url-2:9096, kafka-url-1:9096, kafka-url-3:9096]
2022-06-06T21:11:16.401536+00:00 app[web.3]: check.crcs = true
2022-06-06T21:11:16.401536+00:00 app[web.3]: client.dns.lookup = use_all_dns_ips
2022-06-06T21:11:16.401536+00:00 app[web.3]: client.id = consumer-connect-demo-group-3
2022-06-06T21:11:16.401537+00:00 app[web.3]: client.rack =
2022-06-06T21:11:16.401537+00:00 app[web.3]: connections.max.idle.ms = 540000
2022-06-06T21:11:16.401537+00:00 app[web.3]: default.api.timeout.ms = 60000
2022-06-06T21:11:16.401538+00:00 app[web.3]: enable.auto.commit = false
2022-06-06T21:11:16.401538+00:00 app[web.3]: exclude.internal.topics = true
2022-06-06T21:11:16.401538+00:00 app[web.3]: fetch.max.bytes = 52428800
2022-06-06T21:11:16.401538+00:00 app[web.3]: fetch.max.wait.ms = 500
2022-06-06T21:11:16.401539+00:00 app[web.3]: fetch.min.bytes = 1
2022-06-06T21:11:16.401539+00:00 app[web.3]: group.id = connect-demo-group
2022-06-06T21:11:16.401539+00:00 app[web.3]: group.instance.id = null
2022-06-06T21:11:16.401540+00:00 app[web.3]: heartbeat.interval.ms = 3000
2022-06-06T21:11:16.401540+00:00 app[web.3]: interceptor.classes = []
2022-06-06T21:11:16.401540+00:00 app[web.3]: internal.leave.group.on.close = true
2022-06-06T21:11:16.401541+00:00 app[web.3]: internal.throw.on.fetch.stable.offset.unsupported = false
2022-06-06T21:11:16.401541+00:00 app[web.3]: isolation.level = read_uncommitted
2022-06-06T21:11:16.401541+00:00 app[web.3]: key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2022-06-06T21:11:16.401542+00:00 app[web.3]: max.partition.fetch.bytes = 1048576
2022-06-06T21:11:16.401542+00:00 app[web.3]: max.poll.interval.ms = 300000
2022-06-06T21:11:16.401542+00:00 app[web.3]: max.poll.records = 500
2022-06-06T21:11:16.401542+00:00 app[web.3]: metadata.max.age.ms = 300000
2022-06-06T21:11:16.401543+00:00 app[web.3]: metric.reporters = []
2022-06-06T21:11:16.401544+00:00 app[web.3]: metrics.num.samples = 2
2022-06-06T21:11:16.401544+00:00 app[web.3]: metrics.recording.level = INFO
2022-06-06T21:11:16.401544+00:00 app[web.3]: metrics.sample.window.ms = 30000
2022-06-06T21:11:16.401544+00:00 app[web.3]: partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
2022-06-06T21:11:16.401545+00:00 app[web.3]: receive.buffer.bytes = 65536
2022-06-06T21:11:16.401545+00:00 app[web.3]: reconnect.backoff.max.ms = 1000
2022-06-06T21:11:16.401545+00:00 app[web.3]: reconnect.backoff.ms = 50
2022-06-06T21:11:16.401546+00:00 app[web.3]: request.timeout.ms = 30000
2022-06-06T21:11:16.401546+00:00 app[web.3]: retry.backoff.ms = 100
2022-06-06T21:11:16.401546+00:00 app[web.3]: sasl.client.callback.handler.class = null
2022-06-06T21:11:16.401546+00:00 app[web.3]: sasl.jaas.config = null
2022-06-06T21:11:16.401547+00:00 app[web.3]: sasl.kerberos.kinit.cmd = /usr/bin/kinit
2022-06-06T21:11:16.401547+00:00 app[web.3]: sasl.kerberos.min.time.before.relogin = 60000
2022-06-06T21:11:16.401547+00:00 app[web.3]: sasl.kerberos.service.name = null
2022-06-06T21:11:16.401548+00:00 app[web.3]: sasl.kerberos.ticket.renew.jitter = 0.05
2022-06-06T21:11:16.401548+00:00 app[web.3]: sasl.kerberos.ticket.renew.window.factor = 0.8
2022-06-06T21:11:16.401548+00:00 app[web.3]: sasl.login.callback.handler.class = null
2022-06-06T21:11:16.401548+00:00 app[web.3]: sasl.login.class = null
2022-06-06T21:11:16.401549+00:00 app[web.3]: sasl.login.refresh.buffer.seconds = 300
2022-06-06T21:11:16.401549+00:00 app[web.3]: sasl.login.refresh.min.period.seconds = 60
2022-06-06T21:11:16.401549+00:00 app[web.3]: sasl.login.refresh.window.factor = 0.8
2022-06-06T21:11:16.401550+00:00 app[web.3]: sasl.login.refresh.window.jitter = 0.05
2022-06-06T21:11:16.401550+00:00 app[web.3]: sasl.mechanism = GSSAPI
2022-06-06T21:11:16.401550+00:00 app[web.3]: security.protocol = SSL
2022-06-06T21:11:16.401551+00:00 app[web.3]: security.providers = null
2022-06-06T21:11:16.401551+00:00 app[web.3]: send.buffer.bytes = 131072
2022-06-06T21:11:16.401551+00:00 app[web.3]: session.timeout.ms = 10000
2022-06-06T21:11:16.401551+00:00 app[web.3]: socket.connection.setup.timeout.max.ms = 127000
2022-06-06T21:11:16.401552+00:00 app[web.3]: socket.connection.setup.timeout.ms = 10000
2022-06-06T21:11:16.401558+00:00 app[web.3]: ssl.cipher.suites = null
2022-06-06T21:11:16.401558+00:00 app[web.3]: ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2022-06-06T21:11:16.401559+00:00 app[web.3]: ssl.endpoint.identification.algorithm =
2022-06-06T21:11:16.401559+00:00 app[web.3]: ssl.engine.factory.class = null
2022-06-06T21:11:16.401559+00:00 app[web.3]: ssl.key.password = [hidden]
2022-06-06T21:11:16.401560+00:00 app[web.3]: ssl.keymanager.algorithm = SunX509
2022-06-06T21:11:16.401560+00:00 app[web.3]: ssl.keystore.certificate.chain = null
2022-06-06T21:11:16.401560+00:00 app[web.3]: ssl.keystore.key = null
2022-06-06T21:11:16.401561+00:00 app[web.3]: ssl.keystore.location = /app/.keystore.jks
2022-06-06T21:11:16.401561+00:00 app[web.3]: ssl.keystore.password = [hidden]
2022-06-06T21:11:16.401561+00:00 app[web.3]: ssl.keystore.type = JKS
2022-06-06T21:11:16.401561+00:00 app[web.3]: ssl.protocol = SSL
2022-06-06T21:11:16.401561+00:00 app[web.3]: ssl.provider = null
2022-06-06T21:11:16.401562+00:00 app[web.3]: ssl.secure.random.implementation = null
2022-06-06T21:11:16.401562+00:00 app[web.3]: ssl.trustmanager.algorithm = PKIX
2022-06-06T21:11:16.401562+00:00 app[web.3]: ssl.truststore.certificates = null
2022-06-06T21:11:16.401563+00:00 app[web.3]: ssl.truststore.location = /app/.truststore.jks
2022-06-06T21:11:16.401563+00:00 app[web.3]: ssl.truststore.password = [hidden]
2022-06-06T21:11:16.401563+00:00 app[web.3]: ssl.truststore.type = JKS
2022-06-06T21:11:16.401563+00:00 app[web.3]: value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2022-06-06T21:11:16.401564+00:00 app[web.3]: (org.apache.kafka.clients.consumer.ConsumerConfig)
2022-06-06T21:11:16.730633+00:00 app[web.3]: [2022-06-06 21:11:16,730] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Cluster ID: someid (org.apache.kafka.clients.Metadata)
2022-06-06T21:11:16.732682+00:00 app[web.3]: [2022-06-06 21:11:16,732] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Discovered group coordinator kafka-url-1:9096 (id: 2147483646 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:16.736604+00:00 app[web.3]: [2022-06-06 21:11:16,736] INFO [Worker clientId=connect-1, groupId=connect-demo-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:16.765579+00:00 app[web.3]: [2022-06-06 21:11:16,765] INFO [Worker clientId=connect-1, groupId=connect-demo-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.337307+00:00 app[web.2]: [2022-06-06 21:11:19,337] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Attempt to heartbeat failed since group is rebalancing (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.337364+00:00 app[web.2]: [2022-06-06 21:11:19,337] INFO [Worker clientId=connect-1, groupId=connect-demo-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.342703+00:00 app[web.2]: [2022-06-06 21:11:19,342] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Successfully joined group with generation Generation{generationId=39, memberId='connect-1-id-1', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.347949+00:00 app[web.2]: [2022-06-06 21:11:19,347] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Successfully synced group in generation Generation{generationId=39, memberId='connect-1-id-1', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.342608+00:00 app[web.3]: [2022-06-06 21:11:19,342] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Successfully joined group with generation Generation{generationId=39, memberId='connect-1-id-2', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.347995+00:00 app[web.3]: [2022-06-06 21:11:19,347] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Successfully synced group in generation Generation{generationId=39, memberId='connect-1-id-2', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.339182+00:00 app[web.1]: [2022-06-06 21:11:19,339] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Attempt to heartbeat failed since group is rebalancing (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.339236+00:00 app[web.1]: [2022-06-06 21:11:19,339] INFO [Worker clientId=connect-1, groupId=connect-demo-group] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.341477+00:00 app[web.1]: [2022-06-06 21:11:19,341] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Successfully joined group with generation Generation{generationId=39, memberId='connect-1-id-3', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
2022-06-06T21:11:19.346767+00:00 app[web.1]: [2022-06-06 21:11:19,346] INFO [Worker clientId=connect-1, groupId=connect-demo-group] Successfully synced group in generation Generation{generationId=39, memberId='connect-1-id-3', protocol='sessioned'} (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
  • @OneCricketeer Ah, the `sql-server-connector/status` endpoint shows the SSL handshake failed. I'm using **Docker** and I've provided **both** the `CONNECT_SSL_*`, `CONNECT_CONSUMER_SSL_*`, and `CONNECT_PRODUCER_SSL_*` env config as well as the `database.history.consumer.ssl.*` and `database.history.producer.ssl.*` settings in the connector POST request. Any advice on where I need to configure SSL in addition to (or instead of) this? – Matt Jun 07 '22 at 19:16
  • @OneCricketeer I'm using Heroku Kafka and I have three certs: `client_cert.pem`, `client_key.pem`, `trusted_cert.pem`. Connect and Kafka complete the SSL handshake. Any advice on how to debug this issue? Your `/status/` endpoint tip was extremely helpful. Any tips on configuring SSL on the connectors themselves? – Matt Jun 07 '22 at 19:32
  • @OneCricketeer [another SO post](https://stackoverflow.com/questions/55626057/error-connecting-to-cloud-sql-with-ssl-using-debezium) seems to use `database.history.ssl.*` instead of `database.history.producer.ssl.*` and `database.history.consumer.ssl.*` - is this the correct way? The [Debezium SQL Server docs](https://debezium.io/documentation/reference/stable/connectors/sqlserver.html) mention using the latter. – Matt Jun 07 '22 at 19:35
  • That post is 3yrs old. Debezium has had several releases since then, and properties may have changed – OneCricketeer Jun 07 '22 at 19:45
  • I know there are some JVM flags for debugging SSL connections. I think a variable like `KAFKA_OPTS: "-Djavax.net.debug=all"` will enable it – OneCricketeer Jun 07 '22 at 20:11
  • @OneCricketeer Thanks so much for your help here (don't worry about Docker, I'll handle converting). I'll enable that flag. I keep seeing `Failed to construct kafka producer` ... `caused by: Failed to load SSL keystore /app/.keystore.jks of type JKS` ... `failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded`. I use `keytool` to turn my `.pem` files into `/app/.keystore.jks` and `/app/.truststore.jks` for Kafka Connect. Should I not use this same _working_ Kafka Connect keystore for the connectors? – Matt Jun 07 '22 at 20:33
  • You should be able to keep the files as PEM https://issues.apache.org/jira/browse/KAFKA-10338 – OneCricketeer Jun 07 '22 at 20:34
  • @OneCricketeer I've seen this [codingharbor post on PEM certs](https://codingharbour.com/apache-kafka/using-pem-certificates-with-apache-kafka/), which outlines two ways to use them. I already use the `.jks` certs for Kafka Connect, and that works. Are you recommending migrating the **connector** to PEM certs, or **both Connect and the connector**? – Matt Jun 07 '22 at 20:49
  • @OneCricketeer When I added `KAFKA_OPTS: "-Djavax.net.debug=all"`, the logs were quite hard to interpret. Is the correct way to set up SSL to configure SSL on Kafka Connect (and its producers/consumers), then configure `database.history` for producers/consumers inside the connector? – Matt Jun 07 '22 at 21:00
  • IMO, you should only need to set client properties within the connectors when they actually need it (e.g. MirrorMaker connects to other SSL clusters). So, in theory, only the env-vars/connect-distributed.properties should need the values otherwise. Like I said, I haven't used Connect with SSL, but here are the docs: https://docs.confluent.io/platform/current/connect/security.html – OneCricketeer Jun 07 '22 at 21:08
  • @OneCricketeer I may have found the issue. I see the **same** output when running `keytool -list -v ... -storepass $KEYSTOREPASS` for both `/app/.keystore.jks` and `.keystore.jks`. The same is true for `/app/.truststore.jks` and `.truststore.jks`. The path I provide **Kafka Connect** is `/app/.keystore.jks` and `/app/.truststore.jks`. The weird thing is, `ls` of `/app/` doesn't list `.keystore.jks` or `.truststore.jks`... any advice here? – Matt Jun 07 '22 at 21:51
  • If your working directory is `/app`, then the output will be the same, so I'm not sure what you mean. `ls` never shows dot-files. You don't need the leading dot on the files at all (Java doesn't care one way or the other), but you want to use `ls -la /app` – OneCricketeer Jun 08 '22 at 15:36
  • @OneCricketeer `-la` showed neither. I moved to running `keytool` inside my Docker image, and now I'm not getting an "obvious" SSL error, but I'm getting "unrecoverable error - timeout fetching topic metadata". – Matt Jun 08 '22 at 15:45
  • Hey @OneCricketeer, how can I accept your responses here as an answer? I've verified SSL w/ debug. Could you take a look at my [new question w/ topic metadata](https://stackoverflow.com/questions/72551889/kafka-connect-fails-on-topic-metadata)? – Matt Jun 08 '22 at 20:33
  • If you've solved the issue, feel free to post your own answer and accept that. – OneCricketeer Jun 09 '22 at 18:31

0 Answers