I deployed a 3-node Kafka cluster in Kubernetes as a StatefulSet, running in KRaft mode.
Within the cluster, the configuration is as follows (pseudo-code):
- controller quorum: 0@kafka-0,1@kafka-1,2@kafka-2
- listeners: broker://:9092,controller://:9093,LOCAL://localhost:9094
- advertised_listeners: broker://:9092,controller://:9093,LOCAL://localhost:9094
All three nodes run in combined mode, i.e. each acts as both broker and controller.
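For reference, here is roughly what that pseudo-code looks like as an actual `server.properties` on kafka-0. This is a sketch, not my exact file: the headless Service name `kafka-headless` and the PLAINTEXT protocol mapping are stand-ins, and the controller listener is left out of `advertised.listeners` since KRaft rejects advertising it.

```properties
# KRaft combined mode: every node is both broker and controller
process.roles=broker,controller
node.id=0

# One voter per node; "kafka-headless" is a stand-in for the headless Service name
controller.quorum.voters=0@kafka-0.kafka-headless:9093,1@kafka-1.kafka-headless:9093,2@kafka-2.kafka-headless:9093

listeners=BROKER://:9092,CONTROLLER://:9093,LOCAL://localhost:9094
# The controller listener is not advertised (KRaft rejects it in advertised.listeners)
advertised.listeners=BROKER://kafka-0.kafka-headless:9092,LOCAL://localhost:9094
controller.listener.names=CONTROLLER
inter.broker.listener.name=BROKER
listener.security.protocol.map=BROKER:PLAINTEXT,CONTROLLER:PLAINTEXT,LOCAL:PLAINTEXT
```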
Within the Kubernetes cluster, this KRaft configuration works flawlessly: the cluster stands up, and services pointing at all three brokers can produce and consume messages as expected.
However, to reach the Kafka cluster from outside Kubernetes, I need to port-forward to a pod (`kubectl port-forward`) or otherwise set up an ingress.
Since KRaft uses all controllers/brokers as "leader" nodes, as per the documentation, why am I unable to connect to a single node and get full access to all partitions?
For example, if I port-forward kafka-0 and connect to it, I can create and list topics.
However, if I create a topic with 3 partitions, they might be configured as follows:
- partition 0 : leader 1 : replicas 3
- partition 1 : leader 2 : replicas 3
- partition 2 : leader 0 : replicas 3
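For completeness, this is roughly how I inspect that layout through the port-forward. It is a sketch using the plain Java `AdminClient`; the topic name `demo` is made up.

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Bootstrap through the port-forward to kafka-0's LOCAL listener
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9094");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Set.of("demo"))
                    .allTopicNames().get().get("demo");
            // Print leader and replica count for each partition
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d : leader %d : replicas %d%n",
                            p.partition(), p.leader().id(), p.replicas().size()));
        }
    }
}
```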
If I port-forward kafka-0 (i.e. broker 0, the leader of partition 2 in the example above), I am able to push a message ONLY to partition 2. If I attempt to push a message to partition 0 or partition 1, I receive a NOT_ENOUGH_REPLICAS error.
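The producing side looks roughly like this. Again a sketch: the topic name `demo`, `acks=all`, and the pinned partition are illustrative.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProduceDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Bootstrap through the port-forward to kafka-0's LOCAL listener
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9094");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // assumption: acks=all

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Pin the record to an explicit partition: through this forward,
            // partition 2 succeeds while 0 and 1 fail with NOT_ENOUGH_REPLICAS
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("demo", 0, "key", "value");
            producer.send(record).get();
        }
    }
}
```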
I would have expected the node behind the port-forward to relay the produced message to the broker/controller that owns the partition, in a proxy-like pattern. However, this was not the case.
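For context, the in-cluster clients bootstrap with all three pod addresses, while the external client only sees the single forwarded address (the pod DNS names here are again stand-ins):

```properties
# In-cluster client
bootstrap.servers=kafka-0.kafka-headless:9092,kafka-1.kafka-headless:9092,kafka-2.kafka-headless:9092

# External client through the port-forward: only one node visible
bootstrap.servers=localhost:9094
```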
What actually happens when you specify only a subset of the Kafka nodes in your connection string (`bootstrap.servers`, for example) under KRaft?