
I am running into an issue trying to replicate from one Confluent Cloud cluster to another. I have been following the Confluent documentation on how to do this, but I keep hitting a consistent failure that I have to assume is caused by a mistake in my config.

The config is as follows:

{
  "name": "kafka_topics_replication",
  "config": {
    "name": "kafka_topics_replication",
    "connector.class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
    "topic.whitelist": "topics",
    "src.kafka.bootstrap.servers": "source-broker:9092",
    "src.kafka.security.protocol": "SASL_SSL",
    "src.kafka.security.mechanism": "PLAIN",
    "src.kafka.client.id": "src-to-dst-replicator",
    "src.kafka.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"src-username\" password=\"src-password\" serviceName=\"Kafka\";",
    "confluent.topic.replication.factor": "3",
    "dest.topic.replication.factor": "3",
    "dest.kafka.bootstrap.servers": "dest-broker.cloud:9092",
    "dest.kafka.security.protocol": "SASL_SSL",
    "dest.kafka.sasl.mechanism": "PLAIN",
    "dest.kafka.client.id": "src-to-dst-replicator",
    "dest.kafka.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"dst-username\" password=\"dst-password\" serviceName=\"Kafka\";"
  },
  "tasks": [],
  "type": "source"
}

The connector starts, but keeps logging the following error:

[2020-07-14 14:45:15,568] WARN [kafka_topics_replication|worker] [AdminClient clientId=src-to-dst-replicator] Error connecting to node source-broker:9092 (id: -1 rack: null) (org.apache.kafka.clients.NetworkClient:969)
java.io.IOException: Channel could not be created for socket java.nio.channels.SocketChannel[closed]
    at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:348)
    at org.apache.kafka.common.network.Selector.registerChannel(Selector.java:329)
    at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:964)
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:294)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:1018)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1260)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1203)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
    at org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:228)
    at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:338)
    ... 8 more
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
    at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.firstPrincipal(SaslClientAuthenticator.java:616)
    at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.<init>(SaslClientAuthenticator.java:200)
    at org.apache.kafka.common.network.SaslChannelBuilder.buildClientAuthenticator(SaslChannelBuilder.java:274)
    at org.apache.kafka.common.network.SaslChannelBuilder.lambda$buildChannel$1(SaslChannelBuilder.java:216)
    at org.apache.kafka.common.network.KafkaChannel.<init>(KafkaChannel.java:142)
    at org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:224)
    at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:338)
    at org.apache.kafka.common.network.Selector.registerChannel(Selector.java:329)
    at org.apache.kafka.common.network.Selector.connect(Selector.java:256)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:964)
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:294)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:1018)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1260)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1203)
    at java.lang.Thread.run(Thread.java:748)

Googling around hasn't really turned up anything that seems applicable to this scenario. Fingers crossed that someone here can at least point me in the right direction.

Thanks, -Ryan


1 Answer


Your connector configuration seems to be missing some properties such as:

  • src.kafka.ssl.endpoint.identification.algorithm=https
  • dest.kafka.ssl.endpoint.identification.algorithm=https
  • src.kafka.request.timeout.ms=20000
  • dest.kafka.request.timeout.ms=20000
  • src.kafka.retry.backoff.ms=500
  • dest.kafka.retry.backoff.ms=500
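
For reference, here is a sketch of how those extra properties would slot into the existing "config" block (the timeout and backoff values shown are the ones from the list above, not hard requirements):

```json
{
  "src.kafka.ssl.endpoint.identification.algorithm": "https",
  "dest.kafka.ssl.endpoint.identification.algorithm": "https",
  "src.kafka.request.timeout.ms": "20000",
  "dest.kafka.request.timeout.ms": "20000",
  "src.kafka.retry.backoff.ms": "500",
  "dest.kafka.retry.backoff.ms": "500"
}
```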

Also, the properties src.kafka.sasl.jaas.config and dest.kafka.sasl.jaas.config seem to be using the wrong format for their values.

Instead of:

org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<CLUSTER_API_KEY>\" password=\"<CLUSTER_API_SECRET>\" serviceName=\"Kafka\";

Use:

"org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<CLUSTER_API_KEY>\" password=\"<CLUSTER_API_SECRET>\";"
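
In the connector's JSON payload, with the inner quotes escaped, the two JAAS properties would then look something like this (the API key/secret placeholders are assumptions you would replace with your own credentials):

```json
{
  "src.kafka.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<SRC_API_KEY>\" password=\"<SRC_API_SECRET>\";",
  "dest.kafka.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<DEST_API_KEY>\" password=\"<DEST_API_SECRET>\";"
}
```

Note that serviceName=\"Kafka\" is dropped here: that option is Kerberos-oriented and does not belong in a SASL/PLAIN client configuration, which lines up with the "Kerberos re-login" hint in your stack trace.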

You can find more information about these configuration properties in the Confluent Replicator configuration documentation.

Ricardo Ferreira