
Kafka Connect 5.4 with only one connector and one worker, started with connect-distributed.

Below is the error message:

[2020-06-22 19:09:58,700] ERROR [Worker clientId=connect-1, groupId=test-cluster] 
Uncaught exception in herder work thread, exiting:  (org.apache.kafka.connect.runtime.distributed.DistributedHerder:290)
org.apache.kafka.connect.errors.ConnectException: Error while attempting to create/find topic(s) 'test-connect-offsets'
    at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:262)
    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore$1.run(KafkaOffsetBackingStore.java:99)
    at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:128)
    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:109)
    at org.apache.kafka.connect.runtime.Worker.start(Worker.java:186)
    at org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:121)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:277)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
    at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:229)
    ... 11 more
Caused by: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
GodBlessYou

4 Answers

21

For Kafka Connect to run in distributed mode it uses three topics that are stored on the Kafka cluster and hold information about configuration, offsets, and status. You need to set the following in the Kafka Connect worker properties:

config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
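
For context, a minimal single-broker worker file might look like the sketch below. The broker address, converter choices, and the configs/status topic names are assumptions; the group id and offsets topic name are taken from the log above.

    # Hypothetical single-broker connect-distributed.properties; adjust bootstrap.servers to your broker
    bootstrap.servers=localhost:9092
    group.id=test-cluster
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    # Internal topics and their replication factor (must not exceed the broker count)
    offset.storage.topic=test-connect-offsets
    offset.storage.replication.factor=1
    config.storage.topic=test-connect-configs
    config.storage.replication.factor=1
    status.storage.topic=test-connect-status
    status.storage.replication.factor=1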

If you're using the Docker image then you need to set the environment variables to override these, which in Docker Compose looks like:

CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
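
For illustration only, the relevant service in a docker-compose.yml could look roughly like this; the image tag, service names, and broker address are assumptions, and other required settings (converters, internal topic names, REST port) are omitted:

    connect:
      image: confluentinc/cp-kafka-connect:5.4.0   # assumed image/tag
      depends_on:
        - kafka
      environment:
        CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"    # assumed broker address
        CONNECT_GROUP_ID: "test-cluster"
        CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
        CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
        CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"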

Ref: Configuring Kafka Connect distributed workers

Robin Moffatt
  • The default of 3 is dumb in a Docker container. Most people will start by getting one server working first and then scale up later. – Ryan Feb 15 '21 at 21:19
  • Flip it on its head and someone picking that container up for Production use would say it was "dumb" to set it to 1 by default 🤷‍♂️ :-D – Robin Moffatt Feb 15 '21 at 22:26
  • You start by crawling, then walking, then running. You don't start with the most complicated configuration by default. If your intention is to get started (everyone was a new person at some point), you want it easy with sensible defaults. If you are in the smaller group of people who decide to scale up, then you're likely not surprised that you'll need to set a config that says use a higher replication factor. The Confluent container forces users to understand way more than they should up-front to get started, unlike the Debezium container for example. – Ryan Feb 17 '21 at 14:22
1

In addition to what Robin Moffatt answered, you also need to set in Docker:

CONNECT_CONFLUENT_TOPIC_REPLICATION_FACTOR: 1

And you may also want to change the connector's settings like below:

"topic.creation.default.replication.factor": "1"

(This is Debezium's setting)
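
As a rough sketch of where that setting lives, a connector submitted to the Connect REST API could carry it in its config. The connector name, class, and port below are placeholders, the Debezium-specific connection settings are omitted, and topic.creation.default.partitions is shown only as a companion setting:

    curl -X POST http://localhost:8083/connectors \
      -H "Content-Type: application/json" \
      -d '{
            "name": "my-connector",
            "config": {
              "connector.class": "io.debezium.connector.mysql.MySqlConnector",
              "topic.creation.default.replication.factor": "1",
              "topic.creation.default.partitions": "1"
            }
          }'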

Taku
0

For me, using the Kafka docker.io/bitnami/kafka:3.5 image, these two settings solve the problem:

default.replication.factor=1
offsets.topic.replication.factor=1

Actually I see in the Kafka log that it hits problems while creating the __consumer_offsets and my.app topics. The first setting is for the my.app topic and the second is for offsets topic creation.

Of course in my case I am only using one instance/container, i.e., localhost:9092.
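
If the broker is configured through environment variables rather than a mounted server.properties, the bitnami image conventionally maps them with a KAFKA_CFG_ prefix; a compose fragment might look roughly like this (service name and the rest of the broker config are assumptions):

    kafka:
      image: docker.io/bitnami/kafka:3.5
      environment:
        # Mapped by the image to default.replication.factor and offsets.topic.replication.factor
        KAFKA_CFG_DEFAULT_REPLICATION_FACTOR: "1"
        KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"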

WesternGun
-1

It works after adding the following to config.properties:

offsets.topic.replication.factor=1
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
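
One way to double-check the result (the broker address and script path are assumptions) is to describe the internal topics after restarting the worker and confirm that ReplicationFactor is 1:

    bin/kafka-topics.sh --bootstrap-server localhost:9092 \
      --describe --topic test-connect-offsets
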
GodBlessYou
  • You've found the answer but the additional configuration options you've listed are not applicable and would be misleading for people finding this. – Robin Moffatt Jun 23 '20 at 08:27